merve (HF Staff) committed on
Commit 61d88db · verified · 1 Parent(s): baceff0

Upload 6 files

README.md CHANGED
@@ -1,3 +1,244 @@
- ---
- license: apache-2.0
- ---
---
viewer: false
tags: [uv-script, object-detection]
---

# Object Detection Dataset Scripts

Five scripts to convert, validate, inspect, diff, and sample object detection datasets on the Hub. Six bbox formats are supported, and no setup is required.
This repository is inspired by [panlabel](https://github.com/strickvl/panlabel).

## Quick Start

Convert bounding box formats without cloning anything:

```bash
# Convert COCO-style bboxes to YOLO normalized format
uv run convert-hf-dataset.py merve/coco-dataset merve/coco-yolo \
    --from coco_xywh --to yolo --max-samples 100
```

That's it! The script will:

- Load the dataset from the Hub
- Convert all bounding boxes in-place
- Push the result to a new dataset repo

You can then view the result at `https://huggingface.co/datasets/merve/coco-yolo`.

## Scripts

| Script | Description |
|--------|-------------|
| `convert-hf-dataset.py` | Convert between 6 bbox formats and push to Hub |
| `validate-hf-dataset.py` | Check annotations for errors (invalid bboxes, duplicates, bounds) |
| `stats-hf-dataset.py` | Compute statistics (counts, label histogram, area, co-occurrence) |
| `diff-hf-datasets.py` | Compare two datasets semantically (IoU-based annotation matching) |
| `sample-hf-dataset.py` | Create subsets (random or stratified) and push to Hub |

## Supported Bbox Formats

All scripts support these 6 bounding box formats, matching the [panlabel](https://github.com/strickvl/panlabel) Rust CLI:

| Format | Encoding | Coordinate Space |
|--------|----------|------------------|
| `coco_xywh` | `[x, y, width, height]` | Pixels |
| `xyxy` | `[xmin, ymin, xmax, ymax]` | Pixels |
| `voc` | `[xmin, ymin, xmax, ymax]` | Pixels (alias for `xyxy`) |
| `yolo` | `[center_x, center_y, width, height]` | Normalized 0–1 |
| `tfod` | `[xmin, ymin, xmax, ymax]` | Normalized 0–1 |
| `label_studio` | `[x, y, width, height]` | Percentage 0–100 |

Conversions go through XYXY pixel space as the intermediate representation, so any format can be converted to any other.

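The intermediate-representation idea can be sketched for a single box. This is an illustration only, not code from the scripts; the helper names `coco_to_xyxy` and `xyxy_to_yolo` are made up for the example:

```python
# Two-step conversion sketch: source format -> XYXY pixels -> target format.
# Hypothetical helpers, shown here only to illustrate the pipeline.

def coco_to_xyxy(bbox):
    """coco_xywh [x, y, w, h] -> [xmin, ymin, xmax, ymax] in pixels."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def xyxy_to_yolo(bbox, img_w, img_h):
    """XYXY pixels -> YOLO [cx, cy, w, h] normalized by image size."""
    xmin, ymin, xmax, ymax = bbox
    w, h = xmax - xmin, ymax - ymin
    cx, cy = xmin + w / 2, ymin + h / 2
    return [cx / img_w, cy / img_h, w / img_w, h / img_h]

box = [10, 20, 30, 40]  # coco_xywh, pixels
print(xyxy_to_yolo(coco_to_xyxy(box), img_w=100, img_h=100))
# [0.25, 0.4, 0.3, 0.4]
```

Because every format only needs a mapping to and from XYXY pixels, supporting N formats takes 2N converters instead of N² pairwise ones.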
## Common Options

All scripts accept flexible column mapping. Datasets can store annotations either as flat columns or nested under an `objects` dict; both layouts are handled automatically.

| Option | Description |
|--------|-------------|
| `--bbox-column` | Column containing bboxes (default: `bbox`) |
| `--category-column` | Column containing category labels (default: `category`) |
| `--width-column` | Column for image width (default: `width`) |
| `--height-column` | Column for image height (default: `height`) |
| `--split` | Dataset split (default: `train`) |
| `--max-samples` | Limit the number of samples (useful for testing) |
| `--hf-token` | HF API token (or set the `HF_TOKEN` env var) |
| `--private` | Make the output dataset private |

Every script supports `--help` to list all available options:

```bash
uv run convert-hf-dataset.py --help
```

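For illustration, the two layouts look like this side by side (hypothetical example rows, not taken from a real dataset):

```python
# Flat layout: annotation columns live at the top level of each row.
flat_example = {
    "image_id": 1, "width": 640, "height": 480,
    "bbox": [[10, 20, 30, 40]],   # one bbox per annotated object
    "category": ["cat"],
}

# Nested layout: the same columns grouped under an "objects" dict.
nested_example = {
    "image_id": 1, "width": 640, "height": 480,
    "objects": {
        "bbox": [[10, 20, 30, 40]],
        "category": ["cat"],
    },
}
```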
## Convert (`convert-hf-dataset.py`)

Convert bounding boxes between any of the 6 supported formats:

```bash
# COCO -> VOC
uv run convert-hf-dataset.py merve/license-plates merve/license-plates-voc \
    --from coco_xywh --to voc

# COCO -> YOLO
uv run convert-hf-dataset.py merve/license-plates merve/license-plates-yolo \
    --from coco_xywh --to yolo

# TFOD (normalized xyxy) -> COCO
uv run convert-hf-dataset.py merve/license-plates-tfod merve/license-plates-coco \
    --from tfod --to coco_xywh

# Label Studio (percentage xywh) -> XYXY
uv run convert-hf-dataset.py merve/ls-dataset merve/ls-xyxy \
    --from label_studio --to xyxy

# Test on 10 samples first
uv run convert-hf-dataset.py merve/dataset merve/converted \
    --from xyxy --to yolo --max-samples 10

# Shuffle before converting a subset
uv run convert-hf-dataset.py merve/dataset merve/converted \
    --from coco_xywh --to tfod --max-samples 500 --shuffle
```

| Option | Description |
|--------|-------------|
| `--from` | Source bbox format (required) |
| `--to` | Target bbox format (required) |
| `--batch-size` | Batch size for map (default: 1000) |
| `--create-pr` | Push as a PR instead of a direct commit |
| `--shuffle` | Shuffle dataset before processing |
| `--seed` | Random seed for shuffling (default: 42) |

## Validate (`validate-hf-dataset.py`)

Check annotations for common issues:

```bash
# Basic validation
uv run validate-hf-dataset.py merve/coco-dataset

# Validate a YOLO-format dataset
uv run validate-hf-dataset.py merve/yolo-dataset --bbox-format yolo

# Validate a TFOD-format dataset
uv run validate-hf-dataset.py merve/tfod-dataset --bbox-format tfod

# Strict mode (warnings become errors)
uv run validate-hf-dataset.py merve/dataset --strict

# JSON report
uv run validate-hf-dataset.py merve/dataset --report json

# Stream large datasets without a full download
uv run validate-hf-dataset.py merve/huge-dataset --streaming --max-samples 5000

# Push the validation report to the Hub
uv run validate-hf-dataset.py merve/dataset --output-dataset merve/validation-report
```

**Issue Codes:**

| Code | Level | Description |
|------|-------|-------------|
| E001 | Error | Bbox/category count mismatch |
| E002 | Error | Invalid bbox (missing values) |
| E003 | Error | Non-finite coordinates (NaN/Inf) |
| E004 | Error | xmin > xmax |
| E005 | Error | ymin > ymax |
| W001 | Warning | No annotations in example |
| W002 | Warning | Zero or negative area |
| W003 | Warning | Bbox extends before the image origin |
| W004 | Warning | Bbox extends beyond the image bounds |
| W005 | Warning | Empty category label |
| W006 | Warning | Duplicate file name |

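The geometric codes boil down to simple per-box checks. A minimal sketch of what such checks look like for an XYXY box (illustrative only; `check_bbox_xyxy` is a made-up helper and the script's actual logic may differ):

```python
import math

def check_bbox_xyxy(bbox):
    """Return the issue codes (from the table above) triggered by one XYXY box."""
    issues = []
    xmin, ymin, xmax, ymax = bbox
    if not all(math.isfinite(v) for v in bbox):
        issues.append("E003")  # NaN/Inf coordinates
    else:
        if xmin > xmax:
            issues.append("E004")
        if ymin > ymax:
            issues.append("E005")
        if (xmax - xmin) * (ymax - ymin) <= 0:
            issues.append("W002")  # zero or negative area
    return issues

print(check_bbox_xyxy([50, 10, 20, 40]))  # ['E004', 'W002']
```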
## Stats (`stats-hf-dataset.py`)

Compute rich statistics for a dataset:

```bash
# Basic stats
uv run stats-hf-dataset.py merve/coco-dataset

# Top-20 label histogram, JSON output
uv run stats-hf-dataset.py merve/dataset --top 20 --report json

# Stats for a TFOD-format dataset
uv run stats-hf-dataset.py merve/dataset --bbox-format tfod

# Stream large datasets
uv run stats-hf-dataset.py merve/huge-dataset --streaming --max-samples 10000

# Push the stats report to the Hub
uv run stats-hf-dataset.py merve/dataset --output-dataset merve/stats-report
```

Reports include summary counts, label distribution, annotation density, bbox area and aspect-ratio distributions, per-category area stats, category co-occurrence pairs, and the image resolution distribution.

## Diff (`diff-hf-datasets.py`)

Compare two datasets semantically using IoU-based annotation matching:

```bash
# Basic diff
uv run diff-hf-datasets.py merve/dataset-v1 merve/dataset-v2

# Stricter matching
uv run diff-hf-datasets.py merve/old merve/new --iou-threshold 0.7

# Per-annotation change details
uv run diff-hf-datasets.py merve/old merve/new --detail

# JSON report
uv run diff-hf-datasets.py merve/old merve/new --report json
```

Reports include shared/unique images, shared/unique categories, and matched/added/removed/modified annotations.

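The matching criterion is plain intersection-over-union: two annotations in the same image count as "the same box" when their IoU meets `--iou-threshold`. A minimal sketch of the computation:

```python
def iou(a, b):
    """IoU of two [xmin, ymin, xmax, ymax] boxes."""
    # Intersection rectangle (empty if the boxes don't overlap)
    xa, ya = max(a[0], b[0]), max(a[1], b[1])
    xb, yb = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

print(round(iou([0, 0, 10, 10], [5, 5, 15, 15]), 3))  # 0.143
```

With the default `--iou-threshold 0.5`, the pair above would not match; raising the threshold (e.g. `0.7`) makes matching stricter still.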
## Sample (`sample-hf-dataset.py`)

Create random or stratified subsets:

```bash
# Random 500 samples
uv run sample-hf-dataset.py merve/dataset merve/subset -n 500

# 10% fraction
uv run sample-hf-dataset.py merve/dataset merve/subset --fraction 0.1

# Stratified sampling (preserves class distribution)
uv run sample-hf-dataset.py merve/dataset merve/subset \
    -n 200 --strategy stratified

# Filter by categories
uv run sample-hf-dataset.py merve/dataset merve/subset \
    -n 100 --categories "cat,dog,bird"

# Reproducible sampling
uv run sample-hf-dataset.py merve/dataset merve/subset \
    -n 500 --seed 42
```

| Option | Description |
|--------|-------------|
| `-n` | Number of samples to select |
| `--fraction` | Fraction of dataset (0.0–1.0) |
| `--strategy` | `random` (default) or `stratified` |
| `--categories` | Comma-separated list of categories to filter by |
| `--category-mode` | `images` (default) or `annotations` |
| `--seed` | Random seed for reproducibility (default: 42) |

## Run Locally

```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/panlabel
cd panlabel
uv run convert-hf-dataset.py input-dataset output-dataset --from coco_xywh --to yolo

# Or run directly from the URL
uv run https://huggingface.co/datasets/uv-scripts/panlabel/raw/main/convert-hf-dataset.py \
    input-dataset output-dataset --from coco_xywh --to yolo
```

Works with any Hugging Face dataset containing object detection annotations, whether in COCO, YOLO, VOC, TFOD, or Label Studio format.
convert-hf-dataset.py ADDED
@@ -0,0 +1,375 @@
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets>=3.1.0",
#     "huggingface-hub",
#     "tqdm",
#     "Pillow",
# ]
# ///

"""
Convert bounding box formats in a Hugging Face object detection dataset.

Mirrors panlabel's convert command. Converts between:
- COCO xywh:    [x, y, width, height] in pixels
- XYXY:         [xmin, ymin, xmax, ymax] in pixels
- VOC:          [xmin, ymin, xmax, ymax] in pixels (alias for xyxy)
- YOLO:         [center_x, center_y, width, height] normalized 0-1
- TFOD:         [xmin, ymin, xmax, ymax] normalized 0-1
- Label Studio: [x, y, width, height] percentage 0-100

Reads from HF Hub, converts bboxes in-place, and pushes the result to a new
(or the same) dataset repo on HF Hub.

Examples:
    uv run convert-hf-dataset.py merve/coco-dataset merve/coco-xyxy --from coco_xywh --to xyxy
    uv run convert-hf-dataset.py merve/yolo-dataset merve/yolo-coco --from yolo --to coco_xywh
    uv run convert-hf-dataset.py merve/dataset merve/converted --from xyxy --to yolo --max-samples 100
    uv run convert-hf-dataset.py merve/dataset merve/converted --from tfod --to coco_xywh
    uv run convert-hf-dataset.py merve/dataset merve/converted --from label_studio --to xyxy
"""

import argparse
import json
import logging
import os
import sys
import time
from datetime import datetime
from typing import Any

from datasets import load_dataset
from huggingface_hub import DatasetCard, login

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

BBOX_FORMATS = ["coco_xywh", "xyxy", "voc", "yolo", "tfod", "label_studio"]

def convert_bbox(
    bbox: list[float],
    from_fmt: str,
    to_fmt: str,
    img_w: float = 1.0,
    img_h: float = 1.0,
) -> list[float]:
    """Convert a single bbox between formats via an XYXY pixel-space intermediate."""
    # Step 1: to XYXY pixel space
    if from_fmt == "coco_xywh":
        x, y, w, h = bbox[:4]
        xmin, ymin, xmax, ymax = x, y, x + w, y + h
    elif from_fmt in ("xyxy", "voc"):
        xmin, ymin, xmax, ymax = bbox[:4]
    elif from_fmt == "yolo":
        cx, cy, w, h = bbox[:4]
        xmin = (cx - w / 2) * img_w
        ymin = (cy - h / 2) * img_h
        xmax = (cx + w / 2) * img_w
        ymax = (cy + h / 2) * img_h
    elif from_fmt == "tfod":
        xmin_n, ymin_n, xmax_n, ymax_n = bbox[:4]
        xmin = xmin_n * img_w
        ymin = ymin_n * img_h
        xmax = xmax_n * img_w
        ymax = ymax_n * img_h
    elif from_fmt == "label_studio":
        x_pct, y_pct, w_pct, h_pct = bbox[:4]
        xmin = x_pct / 100.0 * img_w
        ymin = y_pct / 100.0 * img_h
        xmax = (x_pct + w_pct) / 100.0 * img_w
        ymax = (y_pct + h_pct) / 100.0 * img_h
    else:
        raise ValueError(f"Unknown source format: {from_fmt}")

    # Step 2: from XYXY pixel space to target
    if to_fmt in ("xyxy", "voc"):
        return [xmin, ymin, xmax, ymax]
    elif to_fmt == "coco_xywh":
        return [xmin, ymin, xmax - xmin, ymax - ymin]
    elif to_fmt == "yolo":
        if img_w <= 0 or img_h <= 0:
            raise ValueError("YOLO format requires positive image dimensions")
        w = xmax - xmin
        h = ymax - ymin
        cx = (xmin + w / 2) / img_w
        cy = (ymin + h / 2) / img_h
        return [cx, cy, w / img_w, h / img_h]
    elif to_fmt == "tfod":
        if img_w <= 0 or img_h <= 0:
            raise ValueError("TFOD format requires positive image dimensions")
        return [xmin / img_w, ymin / img_h, xmax / img_w, ymax / img_h]
    elif to_fmt == "label_studio":
        if img_w <= 0 or img_h <= 0:
            raise ValueError("Label Studio format requires positive image dimensions")
        x_pct = xmin / img_w * 100.0
        y_pct = ymin / img_h * 100.0
        w_pct = (xmax - xmin) / img_w * 100.0
        h_pct = (ymax - ymin) / img_h * 100.0
        return [x_pct, y_pct, w_pct, h_pct]
    else:
        raise ValueError(f"Unknown target format: {to_fmt}")


def convert_example(
    example: dict[str, Any],
    bbox_column: str,
    from_fmt: str,
    to_fmt: str,
    width_column: str | None,
    height_column: str | None,
) -> dict[str, Any]:
    """Convert bboxes in a single example."""
    objects = example.get("objects")
    is_nested = isinstance(objects, dict)

    if is_nested:
        bboxes = objects.get(bbox_column, []) or []
    else:
        bboxes = example.get(bbox_column, []) or []

    img_w = 1.0
    img_h = 1.0
    if width_column:
        img_w = float(example.get(width_column, 1.0) or 1.0)
    if height_column:
        img_h = float(example.get(height_column, 1.0) or 1.0)

    converted = []
    for bbox in bboxes:
        if bbox is None or len(bbox) < 4:
            # Keep malformed entries as-is; validation is a separate script.
            converted.append(bbox)
            continue
        converted.append(convert_bbox(bbox, from_fmt, to_fmt, img_w, img_h))

    if is_nested:
        new_objects = dict(objects)
        new_objects[bbox_column] = converted
        return {"objects": new_objects}
    else:
        return {bbox_column: converted}

def create_dataset_card(
    source_dataset: str,
    output_dataset: str,
    from_fmt: str,
    to_fmt: str,
    num_samples: int,
    processing_time: str,
    split: str,
) -> str:
    return f"""---
tags:
- object-detection
- bbox-conversion
- panlabel
- uv-script
- generated
---

# Bbox Format Conversion: {from_fmt} -> {to_fmt}

Bounding boxes converted from `{from_fmt}` to `{to_fmt}` format.

## Processing Details

- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
- **Conversion**: `{from_fmt}` -> `{to_fmt}`
- **Number of Samples**: {num_samples:,}
- **Processing Time**: {processing_time}
- **Processing Date**: {datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC")}
- **Split**: `{split}`

## Bbox Formats

| Format | Description |
|--------|-------------|
| `coco_xywh` | `[x, y, width, height]` in pixels |
| `xyxy` | `[xmin, ymin, xmax, ymax]` in pixels |
| `voc` | `[xmin, ymin, xmax, ymax]` in pixels (alias for xyxy) |
| `yolo` | `[center_x, center_y, width, height]` normalized 0-1 |
| `tfod` | `[xmin, ymin, xmax, ymax]` normalized 0-1 |
| `label_studio` | `[x, y, width, height]` percentage 0-100 |

## Reproduction

```bash
uv run convert-hf-dataset.py {source_dataset} {output_dataset} --from {from_fmt} --to {to_fmt}
```

Generated with panlabel-hf (convert-hf-dataset.py)
"""

def main(
    input_dataset: str,
    output_dataset: str,
    from_fmt: str,
    to_fmt: str,
    bbox_column: str = "bbox",
    width_column: str | None = "width",
    height_column: str | None = "height",
    split: str = "train",
    max_samples: int | None = None,
    batch_size: int = 1000,
    hf_token: str | None = None,
    private: bool = False,
    create_pr: bool = False,
    shuffle: bool = False,
    seed: int = 42,
):
    """Convert bbox format in a HF dataset and push to Hub."""

    if from_fmt == to_fmt:
        logger.error(f"Source and target formats are the same: {from_fmt}")
        sys.exit(1)

    start_time = datetime.now()

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Loading dataset: {input_dataset} (split={split})")
    dataset = load_dataset(input_dataset, split=split)

    if shuffle:
        logger.info(f"Shuffling dataset with seed {seed}")
        dataset = dataset.shuffle(seed=seed)

    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))
        logger.info(f"Limited to {len(dataset)} samples")

    total_samples = len(dataset)
    logger.info(f"Converting {total_samples:,} samples: {from_fmt} -> {to_fmt}")

    # Convert using map
    dataset = dataset.map(
        lambda example: convert_example(
            example, bbox_column, from_fmt, to_fmt, width_column, height_column
        ),
        desc=f"Converting {from_fmt} -> {to_fmt}",
    )

    processing_duration = datetime.now() - start_time
    processing_time_str = f"{processing_duration.total_seconds():.1f}s"

    # Add conversion metadata
    conversion_info = json.dumps({
        "source_format": from_fmt,
        "target_format": to_fmt,
        "source_dataset": input_dataset,
        "timestamp": datetime.now().isoformat(),
        "script": "convert-hf-dataset.py",
    })

    if "conversion_info" not in dataset.column_names:
        dataset = dataset.add_column(
            "conversion_info", [conversion_info] * len(dataset)
        )

    # Push to Hub, retrying with exponential backoff
    logger.info(f"Pushing to {output_dataset}")
    max_retries = 3
    for attempt in range(1, max_retries + 1):
        try:
            if attempt > 1:
                logger.warning("Disabling XET (fallback to HTTP upload)")
                os.environ["HF_HUB_DISABLE_XET"] = "1"
            dataset.push_to_hub(
                output_dataset,
                private=private,
                token=HF_TOKEN,
                max_shard_size="500MB",
                create_pr=create_pr,
            )
            break
        except Exception as e:
            logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
            if attempt < max_retries:
                delay = 30 * (2 ** (attempt - 1))
                logger.info(f"Retrying in {delay}s...")
                time.sleep(delay)
            else:
                logger.error("All upload attempts failed.")
                sys.exit(1)

    # Push dataset card
    card_content = create_dataset_card(
        source_dataset=input_dataset,
        output_dataset=output_dataset,
        from_fmt=from_fmt,
        to_fmt=to_fmt,
        num_samples=total_samples,
        processing_time=processing_time_str,
        split=split,
    )
    card = DatasetCard(card_content)
    card.push_to_hub(output_dataset, token=HF_TOKEN)

    logger.info("Done!")
    logger.info(f"Dataset: https://huggingface.co/datasets/{output_dataset}")
    logger.info(f"Converted {total_samples:,} samples in {processing_time_str}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Convert bbox formats in a HF object detection dataset",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Bbox formats:
  coco_xywh     [x, y, width, height] in pixels
  xyxy          [xmin, ymin, xmax, ymax] in pixels
  voc           [xmin, ymin, xmax, ymax] in pixels (alias for xyxy)
  yolo          [cx, cy, w, h] normalized 0-1
  tfod          [xmin, ymin, xmax, ymax] normalized 0-1
  label_studio  [x, y, w, h] percentage 0-100

Examples:
  uv run convert-hf-dataset.py merve/coco merve/coco-xyxy --from coco_xywh --to xyxy
  uv run convert-hf-dataset.py merve/yolo merve/yolo-coco --from yolo --to coco_xywh
  uv run convert-hf-dataset.py merve/tfod merve/tfod-coco --from tfod --to coco_xywh
""",
    )

    parser.add_argument("input_dataset", help="Input dataset ID on HF Hub")
    parser.add_argument("output_dataset", help="Output dataset ID on HF Hub")
    parser.add_argument("--from", dest="from_fmt", required=True, choices=BBOX_FORMATS, help="Source bbox format")
    parser.add_argument("--to", dest="to_fmt", required=True, choices=BBOX_FORMATS, help="Target bbox format")
    parser.add_argument("--bbox-column", default="bbox", help="Column containing bboxes (default: bbox)")
    parser.add_argument("--width-column", default="width", help="Column for image width (default: width)")
    parser.add_argument("--height-column", default="height", help="Column for image height (default: height)")
    parser.add_argument("--split", default="train", help="Dataset split (default: train)")
    parser.add_argument("--max-samples", type=int, help="Max samples to process")
    parser.add_argument("--batch-size", type=int, default=1000, help="Batch size for map (default: 1000)")
    parser.add_argument("--hf-token", help="HF API token")
    parser.add_argument("--private", action="store_true", help="Make output dataset private")
    parser.add_argument("--create-pr", action="store_true", help="Create PR instead of direct push")
    parser.add_argument("--shuffle", action="store_true", help="Shuffle dataset before processing")
    parser.add_argument("--seed", type=int, default=42, help="Random seed (default: 42)")

    args = parser.parse_args()

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        from_fmt=args.from_fmt,
        to_fmt=args.to_fmt,
        bbox_column=args.bbox_column,
        width_column=args.width_column,
        height_column=args.height_column,
        split=args.split,
        max_samples=args.max_samples,
        batch_size=args.batch_size,
        hf_token=args.hf_token,
        private=args.private,
        create_pr=args.create_pr,
        shuffle=args.shuffle,
        seed=args.seed,
    )
diff-hf-datasets.py ADDED
@@ -0,0 +1,428 @@
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets>=3.1.0",
#     "huggingface-hub",
#     "tqdm",
#     "Pillow",
# ]
# ///

"""
Semantic diff between two object detection datasets on Hugging Face Hub.

Mirrors panlabel's diff command. Compares two dataset versions and reports:

- Images shared / only-in-A / only-in-B
- Categories shared / only-in-A / only-in-B
- Annotations added / removed / modified
- Bbox geometry changes (IoU-based matching)

Matching strategies:
- ID-based: match images by the file_name or image_id column
- For annotations within shared images, match by IoU threshold

Examples:
    uv run diff-hf-datasets.py merve/dataset-v1 merve/dataset-v2
    uv run diff-hf-datasets.py merve/old merve/new --iou-threshold 0.7 --detail
    uv run diff-hf-datasets.py merve/old merve/new --report json
"""

import argparse
import json
import logging
import math
import os
import sys
from collections import Counter, defaultdict
from datetime import datetime
from typing import Any

from datasets import load_dataset
from huggingface_hub import login
from tqdm.auto import tqdm

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

BBOX_FORMATS = ["coco_xywh", "xyxy", "voc", "yolo", "tfod", "label_studio"]

def to_xyxy(bbox: list[float], fmt: str, img_w: float = 1.0, img_h: float = 1.0) -> tuple[float, float, float, float]:
    if fmt == "coco_xywh":
        x, y, w, h = bbox
        return (x, y, x + w, y + h)
    elif fmt in ("xyxy", "voc"):
        return tuple(bbox[:4])
    elif fmt == "yolo":
        cx, cy, w, h = bbox
        return (cx - w / 2) * img_w, (cy - h / 2) * img_h, (cx + w / 2) * img_w, (cy + h / 2) * img_h
    elif fmt == "tfod":
        xmin_n, ymin_n, xmax_n, ymax_n = bbox
        return (xmin_n * img_w, ymin_n * img_h, xmax_n * img_w, ymax_n * img_h)
    elif fmt == "label_studio":
        x_pct, y_pct, w_pct, h_pct = bbox
        return (
            x_pct / 100.0 * img_w,
            y_pct / 100.0 * img_h,
            (x_pct + w_pct) / 100.0 * img_w,
            (y_pct + h_pct) / 100.0 * img_h,
        )
    else:
        raise ValueError(f"Unknown bbox format: {fmt}")


def compute_iou(box_a: tuple, box_b: tuple) -> float:
    """Compute IoU between two XYXY boxes."""
    xa = max(box_a[0], box_b[0])
    ya = max(box_a[1], box_b[1])
    xb = min(box_a[2], box_b[2])
    yb = min(box_a[3], box_b[3])

    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    if union <= 0:
        return 0.0
    return inter / union


def extract_annotations(
    example: dict[str, Any],
    bbox_column: str,
    category_column: str,
    bbox_format: str,
    width_column: str | None,
    height_column: str | None,
) -> list[dict]:
    """Extract annotations from an example as a list of {bbox_xyxy, category}."""
    objects = example.get("objects", example)
    bboxes = objects.get(bbox_column, []) or []
    categories = objects.get(category_column, []) or []

    img_w = 1.0
    img_h = 1.0
    if width_column:
        img_w = float(example.get(width_column, 1.0) or 1.0)
    if height_column:
        img_h = float(example.get(height_column, 1.0) or 1.0)

    anns = []
    for i, bbox in enumerate(bboxes):
        if bbox is None or len(bbox) < 4:
            continue
        if not all(math.isfinite(v) for v in bbox[:4]):
            continue
        xyxy = to_xyxy(bbox[:4], bbox_format, img_w, img_h)
        cat = str(categories[i]) if i < len(categories) else "<unknown>"
        anns.append({"bbox_xyxy": xyxy, "category": cat})

    return anns


def match_annotations_iou(
    anns_a: list[dict],
    anns_b: list[dict],
    iou_threshold: float,
) -> tuple[list[tuple[int, int, float]], list[int], list[int]]:
    """Greedy IoU matching. Returns (matched_pairs, unmatched_a, unmatched_b)."""
    if not anns_a or not anns_b:
        return [], list(range(len(anns_a))), list(range(len(anns_b)))

    # Compute all pairwise IoUs above the threshold
    pairs = []
    for i, a in enumerate(anns_a):
        for j, b in enumerate(anns_b):
            iou = compute_iou(a["bbox_xyxy"], b["bbox_xyxy"])
            if iou >= iou_threshold:
                pairs.append((iou, i, j))

    # Greedily match highest-IoU pairs first
    pairs.sort(reverse=True)

    matched_a = set()
    matched_b = set()
    matches = []

    for iou, i, j in pairs:
        if i not in matched_a and j not in matched_b:
            matches.append((i, j, iou))
            matched_a.add(i)
            matched_b.add(j)

    unmatched_a = [i for i in range(len(anns_a)) if i not in matched_a]
    unmatched_b = [j for j in range(len(anns_b)) if j not in matched_b]

    return matches, unmatched_a, unmatched_b


def get_image_key(example: dict, id_column: str) -> str:
    """Get a unique key for an image example."""
    val = example.get(id_column)
    if val is not None:
        return str(val)
    return str(example.get("file_name", ""))

def main(
    dataset_a: str,
    dataset_b: str,
    bbox_column: str = "bbox",
    category_column: str = "category",
    bbox_format: str = "coco_xywh",
    id_column: str = "image_id",
    width_column: str | None = "width",
    height_column: str | None = "height",
    split: str = "train",
    max_samples: int | None = None,
    iou_threshold: float = 0.5,
    detail: bool = False,
    report_format: str = "text",
    hf_token: str | None = None,
):
    """Compare two object detection datasets semantically."""

    start_time = datetime.now()

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Loading dataset A: {dataset_a}")
    ds_a = load_dataset(dataset_a, split=split)
    logger.info(f"Loading dataset B: {dataset_b}")
    ds_b = load_dataset(dataset_b, split=split)

    if max_samples:
        ds_a = ds_a.select(range(min(max_samples, len(ds_a))))
        ds_b = ds_b.select(range(min(max_samples, len(ds_b))))

    # Index by image key
    logger.info("Indexing images...")
    index_a = {}
    for i in tqdm(range(len(ds_a)), desc="Indexing A"):
        ex = ds_a[i]
        key = get_image_key(ex, id_column)
        index_a[key] = ex

    index_b = {}
    for i in tqdm(range(len(ds_b)), desc="Indexing B"):
        ex = ds_b[i]
        key = get_image_key(ex, id_column)
        index_b[key] = ex

    keys_a = set(index_a.keys())
    keys_b = set(index_b.keys())

    shared_keys = keys_a & keys_b
    only_a_keys = keys_a - keys_b
    only_b_keys = keys_b - keys_a

    # Collect all categories and change counts
    cats_a = set()
    cats_b = set()
    total_added = 0
    total_removed = 0
    total_modified = 0
    total_matched = 0
    detail_records = []

    logger.info(f"Comparing {len(shared_keys)} shared images...")
    for key in tqdm(sorted(shared_keys), desc="Diffing"):
        ex_a = index_a[key]
        ex_b = index_b[key]

        anns_a = extract_annotations(ex_a, bbox_column, category_column, bbox_format, width_column, height_column)
        anns_b = extract_annotations(ex_b, bbox_column, category_column, bbox_format, width_column, height_column)

        for a in anns_a:
            cats_a.add(a["category"])
        for b in anns_b:
            cats_b.add(b["category"])

        matches, unmatched_a, unmatched_b = match_annotations_iou(anns_a, anns_b, iou_threshold)

        total_matched += len(matches)
        total_removed += len(unmatched_a)
        total_added += len(unmatched_b)

        # Check for category changes in matched pairs
        for i, j, iou in matches:
            if anns_a[i]["category"] != anns_b[j]["category"]:
                total_modified += 1
                if detail:
                    detail_records.append({
                        "image": key,
                        "type": "modified",
                        "from_category": anns_a[i]["category"],
                        "to_category": anns_b[j]["category"],
                        "iou": round(iou, 3),
                    })

        if detail:
            for idx in unmatched_a:
                detail_records.append({
                    "image": key,
                    "type": "removed",
                    "category": anns_a[idx]["category"],
                    "bbox": list(anns_a[idx]["bbox_xyxy"]),
                })
            for idx in unmatched_b:
                detail_records.append({
                    "image": key,
                    "type": "added",
                    "category": anns_b[idx]["category"],
                    "bbox": list(anns_b[idx]["bbox_xyxy"]),
                })

    # Count annotations in only-A and only-B images
    anns_only_a = 0
    for key in only_a_keys:
282
+ anns = extract_annotations(index_a[key], bbox_column, category_column, bbox_format, width_column, height_column)
283
+ anns_only_a += len(anns)
284
+ for a in anns:
285
+ cats_a.add(a["category"])
286
+
287
+ anns_only_b = 0
288
+ for key in only_b_keys:
289
+ anns = extract_annotations(index_b[key], bbox_column, category_column, bbox_format, width_column, height_column)
290
+ anns_only_b += len(anns)
291
+ for b in anns:
292
+ cats_b.add(b["category"])
293
+
294
+ shared_cats = cats_a & cats_b
295
+ only_a_cats = cats_a - cats_b
296
+ only_b_cats = cats_b - cats_a
297
+
298
+ processing_time = datetime.now() - start_time
299
+
300
+ report = {
301
+ "dataset_a": dataset_a,
302
+ "dataset_b": dataset_b,
303
+ "split": split,
304
+ "iou_threshold": iou_threshold,
305
+ "images": {
306
+ "in_a": len(keys_a),
307
+ "in_b": len(keys_b),
308
+ "shared": len(shared_keys),
309
+ "only_in_a": len(only_a_keys),
310
+ "only_in_b": len(only_b_keys),
311
+ },
312
+ "categories": {
313
+ "in_a": len(cats_a),
314
+ "in_b": len(cats_b),
315
+ "shared": len(shared_cats),
316
+ "only_in_a": sorted(only_a_cats),
317
+ "only_in_b": sorted(only_b_cats),
318
+ },
319
+ "annotations": {
320
+ "matched": total_matched,
321
+ "modified": total_modified,
322
+ "added_in_shared_images": total_added,
323
+ "removed_in_shared_images": total_removed,
324
+ "in_only_a_images": anns_only_a,
325
+ "in_only_b_images": anns_only_b,
326
+ },
327
+ "processing_time_seconds": processing_time.total_seconds(),
328
+ }
329
+
330
+ if detail:
331
+ report["details"] = detail_records
332
+
333
+ if report_format == "json":
334
+ print(json.dumps(report, indent=2))
335
+ else:
336
+ print("\n" + "=" * 60)
337
+ print(f"Dataset Diff")
338
+ print(f" A: {dataset_a}")
339
+ print(f" B: {dataset_b}")
340
+ print("=" * 60)
341
+
342
+ img = report["images"]
343
+ print(f"\n Images:")
344
+ print(f" A: {img['in_a']:,} | B: {img['in_b']:,}")
345
+ print(f" Shared: {img['shared']:,}")
346
+ print(f" Only in A: {img['only_in_a']:,}")
347
+ print(f" Only in B: {img['only_in_b']:,}")
348
+
349
+ cat = report["categories"]
350
+ print(f"\n Categories:")
351
+ print(f" A: {cat['in_a']} | B: {cat['in_b']} | Shared: {cat['shared']}")
352
+ if cat["only_in_a"]:
353
+ print(f" Only in A: {', '.join(cat['only_in_a'][:10])}")
354
+ if cat["only_in_b"]:
355
+ print(f" Only in B: {', '.join(cat['only_in_b'][:10])}")
356
+
357
+ ann = report["annotations"]
358
+ print(f"\n Annotations (IoU >= {iou_threshold}):")
359
+ print(f" Matched: {ann['matched']:,}")
360
+ print(f" Modified: {ann['modified']:,} (category changed)")
361
+ print(f" Added: {ann['added_in_shared_images']:,} (in shared images)")
362
+ print(f" Removed: {ann['removed_in_shared_images']:,} (in shared images)")
363
+ if ann["in_only_a_images"]:
364
+ print(f" In A-only images: {ann['in_only_a_images']:,}")
365
+ if ann["in_only_b_images"]:
366
+ print(f" In B-only images: {ann['in_only_b_images']:,}")
367
+
368
+ if detail and detail_records:
369
+ print(f"\n Detail ({len(detail_records)} changes):")
370
+ for rec in detail_records[:20]:
371
+ if rec["type"] == "modified":
372
+ print(f" [{rec['image']}] {rec['from_category']} -> {rec['to_category']} (IoU={rec['iou']})")
373
+ elif rec["type"] == "added":
374
+ print(f" [{rec['image']}] + {rec['category']}")
375
+ elif rec["type"] == "removed":
376
+ print(f" [{rec['image']}] - {rec['category']}")
377
+ if len(detail_records) > 20:
378
+ print(f" ... and {len(detail_records) - 20} more")
379
+
380
+ print(f"\n Processing time: {processing_time.total_seconds():.1f}s")
381
+ print("=" * 60)
382
+
383
+
384
+ if __name__ == "__main__":
385
+ parser = argparse.ArgumentParser(
386
+ description="Semantic diff between two object detection datasets on HF Hub",
387
+ formatter_class=argparse.RawDescriptionHelpFormatter,
388
+ epilog="""
389
+ Examples:
390
+ uv run diff-hf-datasets.py merve/dataset-v1 merve/dataset-v2
391
+ uv run diff-hf-datasets.py merve/old merve/new --iou-threshold 0.7 --detail
392
+ uv run diff-hf-datasets.py merve/old merve/new --report json
393
+ """,
394
+ )
395
+
396
+ parser.add_argument("dataset_a", help="First dataset ID (A)")
397
+ parser.add_argument("dataset_b", help="Second dataset ID (B)")
398
+ parser.add_argument("--bbox-column", default="bbox", help="Column containing bboxes (default: bbox)")
399
+ parser.add_argument("--category-column", default="category", help="Column containing categories (default: category)")
400
+ parser.add_argument("--bbox-format", choices=BBOX_FORMATS, default="coco_xywh", help="Bbox format (default: coco_xywh)")
401
+ parser.add_argument("--id-column", default="image_id", help="Column to match images by (default: image_id)")
402
+ parser.add_argument("--width-column", default="width", help="Column for image width (default: width)")
403
+ parser.add_argument("--height-column", default="height", help="Column for image height (default: height)")
404
+ parser.add_argument("--split", default="train", help="Dataset split (default: train)")
405
+ parser.add_argument("--max-samples", type=int, help="Max samples per dataset")
406
+ parser.add_argument("--iou-threshold", type=float, default=0.5, help="IoU threshold for matching (default: 0.5)")
407
+ parser.add_argument("--detail", action="store_true", help="Show per-annotation changes")
408
+ parser.add_argument("--report", choices=["text", "json"], default="text", help="Report format (default: text)")
409
+ parser.add_argument("--hf-token", help="HF API token")
410
+
411
+ args = parser.parse_args()
412
+
413
+ main(
414
+ dataset_a=args.dataset_a,
415
+ dataset_b=args.dataset_b,
416
+ bbox_column=args.bbox_column,
417
+ category_column=args.category_column,
418
+ bbox_format=args.bbox_format,
419
+ id_column=args.id_column,
420
+ width_column=args.width_column,
421
+ height_column=args.height_column,
422
+ split=args.split,
423
+ max_samples=args.max_samples,
424
+ iou_threshold=args.iou_threshold,
425
+ detail=args.detail,
426
+ report_format=args.report,
427
+ hf_token=args.hf_token,
428
+ )
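The IoU-based matching that `diff-hf-datasets.py` relies on (greedy one-to-one pairing of boxes whose overlap clears a threshold) can be sketched in isolation. This is a minimal illustration, not the script's code: the `iou` helper below is a stand-in for the one defined earlier in the file, and the box values are made up.

```python
def iou(a: tuple, b: tuple) -> float:
    """IoU of two (xmin, ymin, xmax, ymax) boxes in pixel space."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

# Two annotation sets over the same image: B shifts the first box slightly
# and drops the second one entirely.
boxes_a = [(0, 0, 10, 10), (20, 20, 30, 30)]
boxes_b = [(1, 1, 11, 11)]

matches = [
    (i, j, iou(a, b))
    for i, a in enumerate(boxes_a)
    for j, b in enumerate(boxes_b)
    if iou(a, b) >= 0.5
]
print(matches)  # the shifted box still matches; the dropped one is "removed"
```

With the default threshold of 0.5, a box shifted by one pixel in each direction still counts as the same annotation, which is what lets the diff report "modified" (category changed) separately from "added"/"removed".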
sample-hf-dataset.py ADDED
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets>=3.1.0",
#     "huggingface-hub",
#     "tqdm",
#     "Pillow",
# ]
# ///

"""
Create random or stratified subsets of object detection datasets on HF Hub.

Mirrors panlabel's sample command. Supports:

- Random sampling: Uniform random selection of N images or a fraction
- Stratified sampling: Category-aware weighted sampling to preserve class distribution
- Category filtering: Select only images containing specific categories
- Category mode: Filter by image-level or annotation-level membership

Pushes the resulting subset to a new dataset repo on HF Hub.

Examples:
    uv run sample-hf-dataset.py merve/dataset merve/subset -n 500
    uv run sample-hf-dataset.py merve/dataset merve/subset --fraction 0.1
    uv run sample-hf-dataset.py merve/dataset merve/subset -n 200 --strategy stratified
    uv run sample-hf-dataset.py merve/dataset merve/subset -n 100 --categories "cat,dog,bird"
"""

import argparse
import json
import logging
import os
import random
import sys
import time
from collections import Counter, defaultdict
from datetime import datetime, timezone
from typing import Any

from datasets import load_dataset
from huggingface_hub import DatasetCard, login
from tqdm.auto import tqdm

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def get_image_categories(
    example: dict[str, Any],
    category_column: str,
) -> list[str]:
    """Get list of category labels from an example."""
    objects = example.get("objects", example)
    categories = objects.get(category_column, []) or []
    return [str(c) for c in categories if c is not None]


def create_dataset_card(
    source_dataset: str,
    output_dataset: str,
    strategy: str,
    num_samples: int,
    original_size: int,
    categories_filter: list[str] | None,
    category_mode: str,
    seed: int,
    split: str,
) -> str:
    fraction = num_samples / original_size if original_size > 0 else 0
    filter_str = f"\n- **Category Filter**: {', '.join(categories_filter)}" if categories_filter else ""
    return f"""---
tags:
- object-detection
- dataset-subset
- panlabel
- uv-script
- generated
---

# Dataset Subset: {strategy} sampling

A {strategy} subset of [{source_dataset}](https://huggingface.co/datasets/{source_dataset}).

## Details

- **Source**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
- **Strategy**: {strategy}
- **Samples**: {num_samples:,} / {original_size:,} ({fraction:.1%})
- **Seed**: {seed}
- **Split**: `{split}`
- **Category Mode**: {category_mode}{filter_str}
- **Date**: {datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")}

## Reproduction

```bash
uv run sample-hf-dataset.py {source_dataset} {output_dataset} \\
    -n {num_samples} --strategy {strategy} --seed {seed}
```

Generated with panlabel-hf (sample-hf-dataset.py)
"""


def main(
    input_dataset: str,
    output_dataset: str,
    n: int | None = None,
    fraction: float | None = None,
    strategy: str = "random",
    category_column: str = "category",
    categories: list[str] | None = None,
    category_mode: str = "images",
    split: str = "train",
    seed: int = 42,
    hf_token: str | None = None,
    private: bool = False,
    create_pr: bool = False,
):
    """Create a subset of an object detection dataset and push to Hub."""

    start_time = datetime.now()

    if n is None and fraction is None:
        logger.error("Must specify either -n (count) or --fraction")
        sys.exit(1)

    if n is not None and fraction is not None:
        logger.error("Specify only one of -n or --fraction, not both")
        sys.exit(1)

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Loading dataset: {input_dataset} (split={split})")
    dataset = load_dataset(input_dataset, split=split)
    original_size = len(dataset)
    logger.info(f"Loaded {original_size:,} examples")

    # Determine target count
    if fraction is not None:
        target_n = max(1, int(original_size * fraction))
        logger.info(f"Fraction {fraction} -> {target_n:,} samples")
    else:
        target_n = min(n, original_size)

    rng = random.Random(seed)

    # Category filtering
    if categories:
        logger.info(f"Filtering by categories: {categories} (mode={category_mode})")
        keep_indices = []
        for idx in tqdm(range(original_size), desc="Filtering"):
            ex = dataset[idx]
            img_cats = get_image_categories(ex, category_column)
            # Both modes keep an image if ANY of its annotations match;
            # "annotations" mode does not currently drop the non-matching boxes.
            if any(c in categories for c in img_cats):
                keep_indices.append(idx)

        dataset = dataset.select(keep_indices)
        logger.info(f"After category filter: {len(dataset):,} examples")
        target_n = min(target_n, len(dataset))

    if strategy == "random":
        logger.info(f"Random sampling {target_n:,} from {len(dataset):,}")
        indices = list(range(len(dataset)))
        rng.shuffle(indices)
        selected = sorted(indices[:target_n])
        dataset = dataset.select(selected)

    elif strategy == "stratified":
        logger.info(f"Stratified sampling {target_n:,} from {len(dataset):,}")

        # Count categories per image and build index
        cat_to_images = defaultdict(list)
        for idx in tqdm(range(len(dataset)), desc="Indexing categories"):
            ex = dataset[idx]
            img_cats = set(get_image_categories(ex, category_column))
            for cat in img_cats:
                cat_to_images[cat].append(idx)

        # Compute per-category allocation proportional to frequency
        total_cat_count = sum(len(imgs) for imgs in cat_to_images.values())
        cat_allocations = {}
        for cat, imgs in cat_to_images.items():
            cat_allocations[cat] = max(1, round(target_n * len(imgs) / total_cat_count))

        # Greedy selection: pick from underrepresented categories first
        selected = set()
        cat_fulfilled = Counter()

        # Sort categories by allocation (smallest first for better representation)
        sorted_cats = sorted(cat_allocations.keys(), key=lambda c: cat_allocations[c])

        for cat in sorted_cats:
            needed = cat_allocations[cat] - cat_fulfilled[cat]
            if needed <= 0:
                continue

            available = [i for i in cat_to_images[cat] if i not in selected]
            rng.shuffle(available)
            pick = available[:needed]
            selected.update(pick)

            # Update fulfilled counts for all categories of picked images
            for idx in pick:
                ex = dataset[idx]
                for c in set(get_image_categories(ex, category_column)):
                    cat_fulfilled[c] += 1

        # If we still need more, fill randomly
        if len(selected) < target_n:
            remaining = [i for i in range(len(dataset)) if i not in selected]
            rng.shuffle(remaining)
            selected.update(remaining[: target_n - len(selected)])

        # If we have too many, trim
        selected_list = sorted(selected)
        if len(selected_list) > target_n:
            rng.shuffle(selected_list)
            selected_list = sorted(selected_list[:target_n])

        dataset = dataset.select(selected_list)
        logger.info(f"Selected {len(dataset):,} samples via stratified sampling")

    else:
        logger.error(f"Unknown strategy: {strategy}")
        sys.exit(1)

    num_samples = len(dataset)
    processing_duration = datetime.now() - start_time
    processing_time_str = f"{processing_duration.total_seconds():.1f}s"

    # Push to Hub
    logger.info(f"Pushing {num_samples:,} samples to {output_dataset}")
    max_retries = 3
    for attempt in range(1, max_retries + 1):
        try:
            if attempt > 1:
                logger.warning("Disabling XET (fallback to HTTP upload)")
                os.environ["HF_HUB_DISABLE_XET"] = "1"
            dataset.push_to_hub(
                output_dataset,
                private=private,
                token=HF_TOKEN,
                max_shard_size="500MB",
                create_pr=create_pr,
            )
            break
        except Exception as e:
            logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
            if attempt < max_retries:
                delay = 30 * (2 ** (attempt - 1))
                logger.info(f"Retrying in {delay}s...")
                time.sleep(delay)
            else:
                logger.error("All upload attempts failed.")
                sys.exit(1)

    # Push dataset card
    card_content = create_dataset_card(
        source_dataset=input_dataset,
        output_dataset=output_dataset,
        strategy=strategy,
        num_samples=num_samples,
        original_size=original_size,
        categories_filter=categories,
        category_mode=category_mode,
        seed=seed,
        split=split,
    )
    card = DatasetCard(card_content)
    card.push_to_hub(output_dataset, token=HF_TOKEN)

    logger.info("Done!")
    logger.info(f"Dataset: https://huggingface.co/datasets/{output_dataset}")
    logger.info(f"Sampled {num_samples:,} / {original_size:,} in {processing_time_str}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Create random or stratified subsets of HF object detection datasets",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Strategies:
  random      Uniform random selection (default)
  stratified  Category-aware weighted sampling

Category modes (with --categories):
  images       Keep images containing any matching annotation (default)
  annotations  Currently equivalent to images mode; annotation-level filtering is not yet implemented

Examples:
  uv run sample-hf-dataset.py merve/dataset merve/subset -n 500
  uv run sample-hf-dataset.py merve/dataset merve/subset --fraction 0.1
  uv run sample-hf-dataset.py merve/dataset merve/subset -n 200 --strategy stratified
  uv run sample-hf-dataset.py merve/dataset merve/subset -n 100 --categories "cat,dog"
        """,
    )

    parser.add_argument("input_dataset", help="Input dataset ID on HF Hub")
    parser.add_argument("output_dataset", help="Output dataset ID on HF Hub")
    parser.add_argument("-n", type=int, help="Number of samples to select")
    parser.add_argument("--fraction", type=float, help="Fraction of dataset to select (0.0-1.0)")
    parser.add_argument("--strategy", choices=["random", "stratified"], default="random", help="Sampling strategy (default: random)")
    parser.add_argument("--category-column", default="category", help="Column containing categories (default: category)")
    parser.add_argument("--categories", help="Comma-separated list of categories to filter by")
    parser.add_argument("--category-mode", choices=["images", "annotations"], default="images", help="How to apply category filter (default: images)")
    parser.add_argument("--split", default="train", help="Dataset split (default: train)")
    parser.add_argument("--seed", type=int, default=42, help="Random seed (default: 42)")
    parser.add_argument("--hf-token", help="HF API token")
    parser.add_argument("--private", action="store_true", help="Make output dataset private")
    parser.add_argument("--create-pr", action="store_true", help="Create PR instead of direct push")

    args = parser.parse_args()

    cats = None
    if args.categories:
        cats = [c.strip() for c in args.categories.split(",")]

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        n=args.n,
        fraction=args.fraction,
        strategy=args.strategy,
        category_column=args.category_column,
        categories=cats,
        category_mode=args.category_mode,
        split=args.split,
        seed=args.seed,
        hf_token=args.hf_token,
        private=args.private,
        create_pr=args.create_pr,
    )
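The stratified strategy in `sample-hf-dataset.py` starts by giving each category a slot budget proportional to its frequency, with a floor of one slot so rare classes survive. That allocation step can be sketched on its own; the `allocate` helper and the category counts below are illustrative, not part of the script.

```python
def allocate(target_n: int, counts: dict[str, int]) -> dict[str, int]:
    """Proportional per-category slot allocation with a floor of one,
    mirroring the cat_allocations step in sample-hf-dataset.py."""
    total = sum(counts.values())
    return {cat: max(1, round(target_n * n / total)) for cat, n in counts.items()}

# Frequencies carry over proportionally...
print(allocate(100, {"cat": 60, "dog": 30, "bird": 10}))  # {'cat': 60, 'dog': 30, 'bird': 10}
# ...and a rare class is still guaranteed at least one slot.
print(allocate(10, {"common": 99, "rare": 1}))  # {'common': 10, 'rare': 1}
```

Because of the floor (and because one image can count toward several categories), the allocations can over- or under-shoot `target_n`, which is why the script follows up with a random fill and a trim pass.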
stats-hf-dataset.py ADDED
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets>=3.1.0",
#     "huggingface-hub",
#     "tqdm",
#     "Pillow",
# ]
# ///

"""
Generate rich statistics for object detection datasets on Hugging Face Hub.

Mirrors panlabel's stats command. Computes:

- Summary counts (images, annotations, categories)
- Label distribution histogram (top-N)
- Bounding box statistics (area, aspect ratio, out-of-bounds)
- Annotation density per image
- Per-category bbox statistics
- Category co-occurrence pairs
- Image resolution distribution

Supports COCO-style (xywh), XYXY/VOC, YOLO (normalized center xywh),
TFOD (normalized xyxy), and Label Studio (percentage xywh) bbox formats.
Supports streaming for large datasets. Outputs text or JSON.

Examples:
    uv run stats-hf-dataset.py merve/test-coco-dataset
    uv run stats-hf-dataset.py merve/test-coco-dataset --top 20 --report json
    uv run stats-hf-dataset.py merve/test-coco-dataset --bbox-format tfod
    uv run stats-hf-dataset.py merve/test-coco-dataset --streaming --max-samples 5000
"""

import argparse
import json
import logging
import math
import os
import sys
import time
from collections import Counter, defaultdict
from datetime import datetime
from typing import Any

from datasets import load_dataset
from huggingface_hub import DatasetCard, login
from tqdm.auto import tqdm

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

BBOX_FORMATS = ["coco_xywh", "xyxy", "voc", "yolo", "tfod", "label_studio"]


def to_xyxy(bbox: list[float], fmt: str, img_w: float = 1.0, img_h: float = 1.0) -> tuple[float, float, float, float]:
    """Convert any bbox format to (xmin, ymin, xmax, ymax) in pixel space."""
    if fmt == "coco_xywh":
        x, y, w, h = bbox
        return (x, y, x + w, y + h)
    elif fmt in ("xyxy", "voc"):
        return tuple(bbox[:4])
    elif fmt == "yolo":
        cx, cy, w, h = bbox
        return ((cx - w / 2) * img_w, (cy - h / 2) * img_h, (cx + w / 2) * img_w, (cy + h / 2) * img_h)
    elif fmt == "tfod":
        xmin_n, ymin_n, xmax_n, ymax_n = bbox
        return (xmin_n * img_w, ymin_n * img_h, xmax_n * img_w, ymax_n * img_h)
    elif fmt == "label_studio":
        x_pct, y_pct, w_pct, h_pct = bbox
        return (
            x_pct / 100.0 * img_w,
            y_pct / 100.0 * img_h,
            (x_pct + w_pct) / 100.0 * img_w,
            (y_pct + h_pct) / 100.0 * img_h,
        )
    else:
        raise ValueError(f"Unknown bbox format: {fmt}")


def percentile(sorted_vals: list[float], p: float) -> float:
    """Compute percentile from sorted values."""
    if not sorted_vals:
        return 0.0
    k = (len(sorted_vals) - 1) * p / 100.0
    f = int(k)
    c = f + 1
    if c >= len(sorted_vals):
        return sorted_vals[-1]
    return sorted_vals[f] + (k - f) * (sorted_vals[c] - sorted_vals[f])


def main(
    input_dataset: str,
    bbox_column: str = "bbox",
    category_column: str = "category",
    bbox_format: str = "coco_xywh",
    width_column: str | None = "width",
    height_column: str | None = "height",
    split: str = "train",
    max_samples: int | None = None,
    streaming: bool = False,
    top: int = 10,
    report_format: str = "text",
    tolerance: float = 0.5,
    hf_token: str | None = None,
    output_dataset: str | None = None,
    private: bool = False,
):
    """Compute statistics for an object detection dataset."""

    start_time = datetime.now()

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    logger.info(f"Loading dataset: {input_dataset} (split={split}, streaming={streaming})")
    dataset = load_dataset(input_dataset, split=split, streaming=streaming)

    # Accumulators
    total_images = 0
    total_annotations = 0
    category_counts = Counter()
    annotations_per_image = []
    areas = []
    aspect_ratios = []
    widths = []
    heights = []
    out_of_bounds_count = 0
    zero_area_count = 0
    per_category_areas = defaultdict(list)
    co_occurrence_pairs = Counter()
    images_without_annotations = 0

    iterable = dataset
    if max_samples:
        if streaming:
            iterable = dataset.take(max_samples)
        else:
            iterable = dataset.select(range(min(max_samples, len(dataset))))

    for idx, example in enumerate(tqdm(iterable, desc="Computing stats", total=max_samples)):
        total_images += 1

        objects = example.get("objects", example)
        bboxes = objects.get(bbox_column, []) or []
        categories = objects.get(category_column, []) or []

        # Image dimensions
        img_w = None
        img_h = None
        if width_column:
            img_w = example.get(width_column) or (objects.get(width_column) if isinstance(objects, dict) else None)
        if height_column:
            img_h = example.get(height_column) or (objects.get(height_column) if isinstance(objects, dict) else None)

        if img_w is not None and img_h is not None:
            widths.append(img_w)
            heights.append(img_h)

        num_anns = len(bboxes)
        annotations_per_image.append(num_anns)
        total_annotations += num_anns

        if num_anns == 0:
            images_without_annotations += 1
            continue

        # Track categories and co-occurrences
        image_cats = set()
        for ann_idx, bbox in enumerate(bboxes):
            cat = categories[ann_idx] if ann_idx < len(categories) else None
            cat_str = str(cat) if cat is not None else "<unknown>"
            category_counts[cat_str] += 1
            image_cats.add(cat_str)

            if bbox is None or len(bbox) < 4:
                continue
            if not all(math.isfinite(v) for v in bbox[:4]):
                continue

            w_for_conv = img_w if img_w else 1.0
            h_for_conv = img_h if img_h else 1.0
            xmin, ymin, xmax, ymax = to_xyxy(bbox[:4], bbox_format, w_for_conv, h_for_conv)

            bw = xmax - xmin
            bh = ymax - ymin
            area = bw * bh

            if area <= 0:
                zero_area_count += 1
            else:
                areas.append(area)
                per_category_areas[cat_str].append(area)

            if bh > 0:
                aspect_ratios.append(bw / bh)

            # Out of bounds check
            if img_w is not None and img_h is not None:
                if xmin < -tolerance or ymin < -tolerance or xmax > img_w + tolerance or ymax > img_h + tolerance:
                    out_of_bounds_count += 1

        # Co-occurrence pairs
        sorted_cats = sorted(image_cats)
        for i in range(len(sorted_cats)):
            for j in range(i + 1, len(sorted_cats)):
                co_occurrence_pairs[(sorted_cats[i], sorted_cats[j])] += 1

    processing_time = datetime.now() - start_time

    # Compute distribution stats
    areas.sort()
    aspect_ratios.sort()
    annotations_per_image.sort()

    def dist_stats(vals: list[float]) -> dict:
        if not vals:
            return {"count": 0, "min": 0, "max": 0, "mean": 0, "median": 0, "p25": 0, "p75": 0}
        return {
            "count": len(vals),
            "min": round(vals[0], 2),
            "max": round(vals[-1], 2),
            "mean": round(sum(vals) / len(vals), 2),
            "median": round(percentile(vals, 50), 2),
            "p25": round(percentile(vals, 25), 2),
            "p75": round(percentile(vals, 75), 2),
        }

    # Top-N categories
    top_categories = category_counts.most_common(top)

    # Top co-occurrence pairs
    top_cooccurrences = co_occurrence_pairs.most_common(top)

    # Per-category bbox area stats
    per_cat_stats = {}
    for cat, cat_areas in sorted(per_category_areas.items(), key=lambda x: -len(x[1])):
        cat_areas.sort()
        per_cat_stats[cat] = dist_stats(cat_areas)

    report = {
        "dataset": input_dataset,
        "split": split,
        "summary": {
            "total_images": total_images,
            "total_annotations": total_annotations,
            "unique_categories": len(category_counts),
            "images_without_annotations": images_without_annotations,
            "out_of_bounds_bboxes": out_of_bounds_count,
            "zero_area_bboxes": zero_area_count,
        },
        "label_distribution": {cat: count for cat, count in top_categories},
        "annotation_density": dist_stats([float(x) for x in annotations_per_image]),
        "bbox_area": dist_stats(areas),
        "bbox_aspect_ratio": dist_stats(aspect_ratios),
        "image_resolution": {
            "width": dist_stats([float(w) for w in sorted(widths)]) if widths else {},
            "height": dist_stats([float(h) for h in sorted(heights)]) if heights else {},
        },
        "per_category_area": {cat: per_cat_stats[cat] for cat in list(per_cat_stats)[:top]},
        "co_occurrence_pairs": [
            {"pair": list(pair), "count": count} for pair, count in top_cooccurrences
        ],
        "processing_time_seconds": processing_time.total_seconds(),
        "timestamp": datetime.now().isoformat(),
    }

    if report_format == "json":
        print(json.dumps(report, indent=2))
    else:
        print("\n" + "=" * 60)
        print(f"Dataset Statistics: {input_dataset}")
        print("=" * 60)

        s = report["summary"]
        print(f"\n Images: {s['total_images']:,}")
        print(f" Annotations: {s['total_annotations']:,}")
        print(f" Categories: {s['unique_categories']:,}")
        print(f" Empty images: {s['images_without_annotations']:,}")
        print(f" Out-of-bounds: {s['out_of_bounds_bboxes']:,}")
        print(f" Zero-area bboxes: {s['zero_area_bboxes']:,}")

        if total_images > 0:
            print(f"\n Annotations/image: {total_annotations / total_images:.1f} avg")

        d = report["annotation_density"]
        if d["count"]:
            print(f" min={d['min']}, median={d['median']}, max={d['max']}")

        print(f"\n Label Distribution (top {top}):")
        for cat, count in top_categories:
            pct = 100.0 * count / total_annotations if total_annotations else 0
            bar = "#" * int(pct / 2)
            print(f" {cat:30s} {count:>8,} ({pct:5.1f}%) {bar}")

        a = report["bbox_area"]
        if a["count"]:
            print("\n Bbox Area:")
            print(f" min={a['min']}, median={a['median']}, mean={a['mean']}, max={a['max']}")

        ar = report["bbox_aspect_ratio"]
        if ar["count"]:
            print("\n Bbox Aspect Ratio (w/h):")
            print(f" min={ar['min']}, median={ar['median']}, mean={ar['mean']}, max={ar['max']}")

        if top_cooccurrences:
            print(f"\n Category Co-occurrence (top {top}):")
            for pair, count in top_cooccurrences:
                print(f" {pair[0]} + {pair[1]}: {count:,}")

        print(f"\n Processing time: {processing_time.total_seconds():.1f}s")
        print("=" * 60)

    # Optionally push stats report as a dataset
    if output_dataset:
        from datasets import Dataset as HFDataset

        report_ds = HFDataset.from_dict({
            "report_json": [json.dumps(report)],
            "dataset": [input_dataset],
            "total_images": [total_images],
            "total_annotations": [total_annotations],
            "unique_categories": [len(category_counts)],
            "timestamp": [datetime.now().isoformat()],
        })

        logger.info(f"Pushing stats report to {output_dataset}")
        max_retries = 3
        for attempt in range(1, max_retries + 1):
            try:
                if attempt > 1:
                    os.environ["HF_HUB_DISABLE_XET"] = "1"
                report_ds.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
                break
            except Exception as e:
                logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
                if attempt < max_retries:
                    time.sleep(30 * (2 ** (attempt - 1)))
                else:
                    logger.error("All upload attempts failed.")
                    sys.exit(1)

        logger.info(f"Stats pushed to: https://huggingface.co/datasets/{output_dataset}")
346
+
347
+
348
+ if __name__ == "__main__":
349
+ parser = argparse.ArgumentParser(
350
+ description="Generate statistics for object detection datasets on HF Hub",
351
+ formatter_class=argparse.RawDescriptionHelpFormatter,
352
+ epilog="""
353
+ Bbox formats:
354
+ coco_xywh [x, y, width, height] in pixels (default)
355
+ xyxy [xmin, ymin, xmax, ymax] in pixels
356
+ voc [xmin, ymin, xmax, ymax] in pixels (alias for xyxy)
357
+ yolo [cx, cy, w, h] normalized 0-1
358
+ tfod [xmin, ymin, xmax, ymax] normalized 0-1
359
+ label_studio [x, y, w, h] percentage 0-100
360
+
361
+ Examples:
362
+ uv run stats-hf-dataset.py merve/coco-dataset
363
+ uv run stats-hf-dataset.py merve/coco-dataset --top 20 --report json
364
+ uv run stats-hf-dataset.py merve/coco-dataset --streaming --max-samples 5000
365
+ """,
366
+ )
367
+
368
+ parser.add_argument("input_dataset", help="Input dataset ID on HF Hub")
369
+ parser.add_argument("--bbox-column", default="bbox", help="Column containing bboxes (default: bbox)")
370
+ parser.add_argument("--category-column", default="category", help="Column containing categories (default: category)")
371
+ parser.add_argument("--bbox-format", choices=BBOX_FORMATS, default="coco_xywh", help="Bbox format (default: coco_xywh)")
372
+ parser.add_argument("--width-column", default="width", help="Column for image width (default: width)")
373
+ parser.add_argument("--height-column", default="height", help="Column for image height (default: height)")
374
+ parser.add_argument("--split", default="train", help="Dataset split (default: train)")
375
+ parser.add_argument("--max-samples", type=int, help="Max samples to process")
376
+ parser.add_argument("--streaming", action="store_true", help="Use streaming mode")
377
+ parser.add_argument("--top", type=int, default=10, help="Top-N items for histograms (default: 10)")
378
+ parser.add_argument("--report", choices=["text", "json"], default="text", help="Report format (default: text)")
379
+ parser.add_argument("--tolerance", type=float, default=0.5, help="Out-of-bounds tolerance in pixels (default: 0.5)")
380
+ parser.add_argument("--hf-token", help="HF API token")
381
+ parser.add_argument("--output-dataset", help="Push stats report to this HF dataset")
382
+ parser.add_argument("--private", action="store_true", help="Make output dataset private")
383
+
384
+ args = parser.parse_args()
385
+
386
+ main(
387
+ input_dataset=args.input_dataset,
388
+ bbox_column=args.bbox_column,
389
+ category_column=args.category_column,
390
+ bbox_format=args.bbox_format,
391
+ width_column=args.width_column,
392
+ height_column=args.height_column,
393
+ split=args.split,
394
+ max_samples=args.max_samples,
395
+ streaming=args.streaming,
396
+ top=args.top,
397
+ report_format=args.report,
398
+ tolerance=args.tolerance,
399
+ hf_token=args.hf_token,
400
+ output_dataset=args.output_dataset,
401
+ private=args.private,
402
+ )
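The bbox-format table in the epilog above boils down to a little arithmetic per format. A minimal standalone sketch (illustrative only, not part of `stats-hf-dataset.py`; the helper name `coco_to_formats` is ours) showing how one COCO-style pixel bbox maps to the other conventions:

```python
# Hedged sketch: convert one COCO-style [x, y, w, h] pixel bbox into the
# other formats listed in the epilog. Helper name is illustrative.

def coco_to_formats(bbox, img_w, img_h):
    x, y, w, h = bbox
    xyxy = [x, y, x + w, y + h]                        # pixel corners
    yolo = [(x + w / 2) / img_w, (y + h / 2) / img_h,  # normalized center
            w / img_w, h / img_h]                      # ...and size
    tfod = [x / img_w, y / img_h,                      # normalized corners
            (x + w) / img_w, (y + h) / img_h]
    label_studio = [100 * x / img_w, 100 * y / img_h,  # percentages 0-100
                    100 * w / img_w, 100 * h / img_h]
    return {"xyxy": xyxy, "yolo": yolo, "tfod": tfod, "label_studio": label_studio}

print(coco_to_formats([10, 20, 30, 40], img_w=100, img_h=200))
```

For `[10, 20, 30, 40]` in a 100x200 image this yields `xyxy [10, 20, 40, 60]`, `yolo [0.25, 0.2, 0.3, 0.2]`, `tfod [0.1, 0.1, 0.4, 0.3]`, and `label_studio [10, 10, 30, 20]`.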
validate-hf-dataset.py ADDED
@@ -0,0 +1,455 @@
+ # /// script
+ # requires-python = ">=3.11"
+ # dependencies = [
+ #     "datasets>=3.1.0",
+ #     "huggingface-hub",
+ #     "tqdm",
+ #     "Pillow",
+ # ]
+ # ///
+
+ """
+ Validate object detection annotations in a Hugging Face dataset.
+
+ Streams a HF dataset and checks for common annotation issues, mirroring
+ panlabel's validate command. Checks include:
+
+ - Duplicate image file names
+ - Missing or empty bounding boxes
+ - Bounding box ordering (xmin <= xmax, ymin <= ymax)
+ - Bounding boxes out of image bounds
+ - Non-finite coordinates (NaN/Inf)
+ - Zero-area bounding boxes
+ - Empty or missing category labels
+ - Category ID consistency
+
+ Supports COCO-style (xywh), XYXY/VOC, YOLO (normalized center xywh),
+ TFOD (normalized xyxy), and Label Studio (percentage xywh) bbox formats.
+ Outputs a validation report as text or JSON.
+
+ Examples:
+     uv run validate-hf-dataset.py merve/test-coco-dataset
+     uv run validate-hf-dataset.py merve/test-coco-dataset --bbox-format xyxy --strict
+     uv run validate-hf-dataset.py merve/test-coco-dataset --bbox-format tfod --report json
+     uv run validate-hf-dataset.py merve/test-coco-dataset --report json --max-samples 1000
+ """
+
+ import argparse
+ import json
+ import logging
+ import math
+ import os
+ import sys
+ import time
+ from collections import Counter, defaultdict
+ from datetime import datetime
+ from typing import Any
+
+ from datasets import load_dataset
+ from huggingface_hub import login
+ from tqdm.auto import tqdm
+
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ BBOX_FORMATS = ["coco_xywh", "xyxy", "voc", "yolo", "tfod", "label_studio"]
+
+
+ def to_xyxy(bbox: list[float], fmt: str, img_w: float = 1.0, img_h: float = 1.0) -> tuple[float, float, float, float]:
+     """Convert any bbox format to (xmin, ymin, xmax, ymax) in pixel space."""
+     if fmt == "coco_xywh":
+         x, y, w, h = bbox
+         return (x, y, x + w, y + h)
+     elif fmt in ("xyxy", "voc"):
+         return tuple(bbox[:4])
+     elif fmt == "yolo":
+         cx, cy, w, h = bbox
+         xmin = (cx - w / 2) * img_w
+         ymin = (cy - h / 2) * img_h
+         xmax = (cx + w / 2) * img_w
+         ymax = (cy + h / 2) * img_h
+         return (xmin, ymin, xmax, ymax)
+     elif fmt == "tfod":
+         xmin_n, ymin_n, xmax_n, ymax_n = bbox
+         return (xmin_n * img_w, ymin_n * img_h, xmax_n * img_w, ymax_n * img_h)
+     elif fmt == "label_studio":
+         x_pct, y_pct, w_pct, h_pct = bbox
+         return (
+             x_pct / 100.0 * img_w,
+             y_pct / 100.0 * img_h,
+             (x_pct + w_pct) / 100.0 * img_w,
+             (y_pct + h_pct) / 100.0 * img_h,
+         )
+     else:
+         raise ValueError(f"Unknown bbox format: {fmt}")
+
+
+ def is_finite(val: float) -> bool:
+     return math.isfinite(val)
+
+
+ def validate_example(
+     example: dict[str, Any],
+     idx: int,
+     bbox_column: str,
+     category_column: str,
+     bbox_format: str,
+     image_column: str,
+     width_column: str | None,
+     height_column: str | None,
+     tolerance: float = 0.5,
+ ) -> list[dict]:
+     """Validate a single example. Returns a list of issue dicts."""
+     issues = []
+
+     def add_issue(level: str, code: str, message: str, ann_idx: int | None = None):
+         issue = {"level": level, "code": code, "message": message, "example_idx": idx}
+         if ann_idx is not None:
+             issue["annotation_idx"] = ann_idx
+         issues.append(issue)
+
+     # Get objects container — handle nested dict (objects column) or flat lists
+     objects = example.get("objects", example)
+     bboxes = objects.get(bbox_column, [])
+     categories = objects.get(category_column, [])
+
+     if bboxes is None:
+         bboxes = []
+     if categories is None:
+         categories = []
+
+     # Image dimensions (if available)
+     img_w = None
+     img_h = None
+     if width_column and width_column in example:
+         img_w = example[width_column]
+     elif width_column and objects and width_column in objects:
+         img_w = objects[width_column]
+     if height_column and height_column in example:
+         img_h = example[height_column]
+     elif height_column and objects and height_column in objects:
+         img_h = objects[height_column]
+
+     if not bboxes and not categories:
+         add_issue("warning", "W001", "No annotations found in this example")
+         return issues
+
+     if len(bboxes) != len(categories):
+         add_issue(
+             "error",
+             "E001",
+             f"Bbox count ({len(bboxes)}) != category count ({len(categories)})",
+         )
+
+     for ann_idx, bbox in enumerate(bboxes):
+         if bbox is None or len(bbox) < 4:
+             add_issue("error", "E002", f"Invalid bbox (need 4 values, got {bbox})", ann_idx)
+             continue
+
+         # Check finite
+         if not all(is_finite(v) for v in bbox[:4]):
+             add_issue("error", "E003", f"Non-finite bbox coordinates: {bbox}", ann_idx)
+             continue
+
+         # Convert to xyxy
+         w_for_conv = img_w if img_w else 1.0
+         h_for_conv = img_h if img_h else 1.0
+         xmin, ymin, xmax, ymax = to_xyxy(bbox[:4], bbox_format, w_for_conv, h_for_conv)
+
+         # Check ordering
+         if xmin > xmax:
+             add_issue("error", "E004", f"xmin ({xmin}) > xmax ({xmax})", ann_idx)
+         if ymin > ymax:
+             add_issue("error", "E005", f"ymin ({ymin}) > ymax ({ymax})", ann_idx)
+
+         # Check zero area
+         area = (xmax - xmin) * (ymax - ymin)
+         if area <= 0:
+             add_issue("warning", "W002", f"Zero or negative area bbox: {bbox}", ann_idx)
+
+         # Check bounds (only if image dimensions available)
+         if img_w is not None and img_h is not None:
+             if xmin < -tolerance or ymin < -tolerance:
+                 add_issue(
+                     "warning",
+                     "W003",
+                     f"Bbox extends before image origin: ({xmin}, {ymin})",
+                     ann_idx,
+                 )
+             if xmax > img_w + tolerance or ymax > img_h + tolerance:
+                 add_issue(
+                     "warning",
+                     "W004",
+                     f"Bbox extends beyond image bounds: ({xmax}, {ymax}) > ({img_w}, {img_h})",
+                     ann_idx,
+                 )
+
+     # Check categories
+     for ann_idx, cat in enumerate(categories):
+         if cat is None or (isinstance(cat, str) and cat.strip() == ""):
+             add_issue("warning", "W005", "Empty category label", ann_idx)
+
+     return issues
+
+
+ def main(
+     input_dataset: str,
+     bbox_column: str = "bbox",
+     category_column: str = "category",
+     bbox_format: str = "coco_xywh",
+     image_column: str = "image",
+     width_column: str | None = "width",
+     height_column: str | None = "height",
+     split: str = "train",
+     max_samples: int | None = None,
+     streaming: bool = False,
+     strict: bool = False,
+     report_format: str = "text",
+     tolerance: float = 0.5,
+     hf_token: str | None = None,
+     output_dataset: str | None = None,
+     private: bool = False,
+ ):
+     """Validate an object detection dataset from HF Hub."""
+
+     start_time = datetime.now()
+
+     HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
+     if HF_TOKEN:
+         login(token=HF_TOKEN)
+
+     logger.info(f"Loading dataset: {input_dataset} (split={split}, streaming={streaming})")
+     dataset = load_dataset(input_dataset, split=split, streaming=streaming)
+
+     all_issues = []
+     file_names = []
+     total_annotations = 0
+     total_examples = 0
+     category_counts = Counter()
+     error_count = 0
+     warning_count = 0
+
+     iterable = dataset
+     if max_samples:
+         if streaming:
+             iterable = dataset.take(max_samples)
+         else:
+             iterable = dataset.select(range(min(max_samples, len(dataset))))
+
+     for idx, example in enumerate(tqdm(iterable, desc="Validating", total=max_samples)):
+         total_examples += 1
+
+         issues = validate_example(
+             example=example,
+             idx=idx,
+             bbox_column=bbox_column,
+             category_column=category_column,
+             bbox_format=bbox_format,
+             image_column=image_column,
+             width_column=width_column,
+             height_column=height_column,
+             tolerance=tolerance,
+         )
+         all_issues.extend(issues)
+
+         # Count stats
+         objects = example.get("objects", example)
+         bboxes = objects.get(bbox_column, []) or []
+         categories = objects.get(category_column, []) or []
+         total_annotations += len(bboxes)
+         for cat in categories:
+             if cat is not None:
+                 category_counts[str(cat)] += 1
+
+         # Track file names for duplicate check
+         fname = example.get("file_name") or example.get("image_id") or str(idx)
+         file_names.append(fname)
+
+     # Check duplicate file names
+     fname_counts = Counter(file_names)
+     duplicates = {k: v for k, v in fname_counts.items() if v > 1}
+     for fname, count in duplicates.items():
+         all_issues.append({
+             "level": "warning",
+             "code": "W006",
+             "message": f"Duplicate file name '{fname}' appears {count} times",
+             "example_idx": None,
+         })
+
+     for issue in all_issues:
+         if issue["level"] == "error":
+             error_count += 1
+         else:
+             warning_count += 1
+
+     processing_time = datetime.now() - start_time
+
+     # Build report
+     report = {
+         "dataset": input_dataset,
+         "split": split,
+         "total_examples": total_examples,
+         "total_annotations": total_annotations,
+         "unique_categories": len(category_counts),
+         "errors": error_count,
+         "warnings": warning_count,
+         "duplicate_filenames": len(duplicates),
+         "issues": all_issues,
+         "processing_time_seconds": processing_time.total_seconds(),
+         "timestamp": datetime.now().isoformat(),
+         "valid": error_count == 0 and (not strict or warning_count == 0),
+     }
+
+     if report_format == "json":
+         print(json.dumps(report, indent=2))
+     else:
+         print("\n" + "=" * 60)
+         print(f"Validation Report: {input_dataset}")
+         print("=" * 60)
+         print(f" Examples: {total_examples:,}")
+         print(f" Annotations: {total_annotations:,}")
+         print(f" Categories: {len(category_counts):,}")
+         print(f" Errors: {error_count}")
+         print(f" Warnings: {warning_count}")
+         if duplicates:
+             print(f" Duplicate IDs: {len(duplicates)}")
+         print(f" Processing: {processing_time.total_seconds():.1f}s")
+         print()
+
+         if all_issues:
+             print("Issues:")
+             # Group by code
+             by_code = defaultdict(list)
+             for issue in all_issues:
+                 by_code[issue["code"]].append(issue)
+
+             for code in sorted(by_code.keys()):
+                 code_issues = by_code[code]
+                 level = code_issues[0]["level"].upper()
+                 sample = code_issues[0]["message"]
+                 print(f" [{level}] {code}: {sample}")
+                 if len(code_issues) > 1:
+                     print(f" ... and {len(code_issues) - 1} more")
+             print()
+
+         status = "VALID" if report["valid"] else "INVALID"
+         mode = " (strict)" if strict else ""
+         print(f"Result: {status}{mode}")
+         print("=" * 60)
+
+     # Optionally push validation report as a dataset
+     if output_dataset:
+         from datasets import Dataset as HFDataset
+
+         report_ds = HFDataset.from_dict({
+             "report": [json.dumps(report)],
+             "dataset": [input_dataset],
+             "valid": [report["valid"]],
+             "errors": [error_count],
+             "warnings": [warning_count],
+             "total_examples": [total_examples],
+             "total_annotations": [total_annotations],
+             "timestamp": [datetime.now().isoformat()],
+         })
+
+         logger.info(f"Pushing validation report to {output_dataset}")
+         max_retries = 3
+         for attempt in range(1, max_retries + 1):
+             try:
+                 if attempt > 1:
+                     os.environ["HF_HUB_DISABLE_XET"] = "1"
+                 report_ds.push_to_hub(
+                     output_dataset,
+                     private=private,
+                     token=HF_TOKEN,
+                 )
+                 break
+             except Exception as e:
+                 logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
+                 if attempt < max_retries:
+                     time.sleep(30 * (2 ** (attempt - 1)))
+                 else:
+                     logger.error("All upload attempts failed.")
+                     sys.exit(1)
+
+         logger.info(f"Report pushed to: https://huggingface.co/datasets/{output_dataset}")
+
+     # Exit non-zero whenever the report is invalid; "valid" already encodes
+     # the strictness rule (errors always fail, warnings fail only with --strict).
+     if not report["valid"]:
+         sys.exit(1)
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(
+         description="Validate object detection annotations in a HF dataset",
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+         epilog="""
+ Bbox formats:
+   coco_xywh     [x, y, width, height] in pixels (default)
+   xyxy          [xmin, ymin, xmax, ymax] in pixels
+   voc           [xmin, ymin, xmax, ymax] in pixels (alias for xyxy)
+   yolo          [cx, cy, w, h] normalized 0-1
+   tfod          [xmin, ymin, xmax, ymax] normalized 0-1
+   label_studio  [x, y, w, h] percentage 0-100
+
+ Issue codes:
+   E001  Bbox/category count mismatch
+   E002  Invalid bbox (missing values)
+   E003  Non-finite coordinates (NaN/Inf)
+   E004  xmin > xmax
+   E005  ymin > ymax
+   W001  No annotations in example
+   W002  Zero or negative area
+   W003  Bbox before image origin
+   W004  Bbox beyond image bounds
+   W005  Empty category label
+   W006  Duplicate file name
+
+ Examples:
+   uv run validate-hf-dataset.py merve/coco-dataset
+   uv run validate-hf-dataset.py merve/coco-dataset --bbox-format xyxy --strict
+   uv run validate-hf-dataset.py merve/coco-dataset --streaming --max-samples 500
+ """,
+     )
+
+     parser.add_argument("input_dataset", help="Input dataset ID on HF Hub")
+     parser.add_argument("--bbox-column", default="bbox", help="Column containing bboxes (default: bbox)")
+     parser.add_argument("--category-column", default="category", help="Column containing categories (default: category)")
+     parser.add_argument(
+         "--bbox-format",
+         choices=BBOX_FORMATS,
+         default="coco_xywh",
+         help="Bounding box format (default: coco_xywh)",
+     )
+     parser.add_argument("--image-column", default="image", help="Column containing images (default: image)")
+     parser.add_argument("--width-column", default="width", help="Column for image width (default: width)")
+     parser.add_argument("--height-column", default="height", help="Column for image height (default: height)")
+     parser.add_argument("--split", default="train", help="Dataset split (default: train)")
+     parser.add_argument("--max-samples", type=int, help="Max samples to validate")
+     parser.add_argument("--streaming", action="store_true", help="Use streaming mode (no full download)")
+     parser.add_argument("--strict", action="store_true", help="Treat warnings as errors")
+     parser.add_argument("--report", choices=["text", "json"], default="text", help="Report format (default: text)")
+     parser.add_argument("--tolerance", type=float, default=0.5, help="Out-of-bounds tolerance in pixels (default: 0.5)")
+     parser.add_argument("--hf-token", help="HF API token")
+     parser.add_argument("--output-dataset", help="Push validation report to this HF dataset")
+     parser.add_argument("--private", action="store_true", help="Make output dataset private")
+
+     args = parser.parse_args()
+
+     main(
+         input_dataset=args.input_dataset,
+         bbox_column=args.bbox_column,
+         category_column=args.category_column,
+         bbox_format=args.bbox_format,
+         image_column=args.image_column,
+         width_column=args.width_column,
+         height_column=args.height_column,
+         split=args.split,
+         max_samples=args.max_samples,
+         streaming=args.streaming,
+         strict=args.strict,
+         report_format=args.report,
+         tolerance=args.tolerance,
+         hf_token=args.hf_token,
+         output_dataset=args.output_dataset,
+         private=args.private,
+     )
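To see what the per-bbox checks in `validate_example` flag, here is a minimal self-contained sketch of a few of them (E003, E004, W002, W004) for coco_xywh input. It is illustrative only; `check_bbox` is not the script's API, just a compressed restatement of the same arithmetic:

```python
import math

# Illustrative re-implementation of a subset of the validator's checks:
# E003 non-finite coords, E004 ordering, W002 zero area, W004 out of bounds.
def check_bbox(bbox, img_w, img_h, tolerance=0.5):
    if not all(math.isfinite(v) for v in bbox):
        return ["E003"]
    x, y, w, h = bbox
    xmin, ymin, xmax, ymax = x, y, x + w, y + h  # coco_xywh -> xyxy
    issues = []
    if xmin > xmax:
        issues.append("E004")
    if (xmax - xmin) * (ymax - ymin) <= 0:
        issues.append("W002")
    if xmax > img_w + tolerance or ymax > img_h + tolerance:
        issues.append("W004")
    return issues

print(check_bbox([10, 10, 0, 5], 100, 100))    # zero width -> ['W002']
print(check_bbox([90, 90, 20, 20], 100, 100))  # past the edge -> ['W004']
```

As in the script, out-of-bounds is a warning with a half-pixel tolerance, so a box ending exactly at the image edge passes cleanly.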