---
dataset_info:
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: is_negative
dtype: bool
- name: corner_tl_x
dtype: float32
- name: corner_tl_y
dtype: float32
- name: corner_tr_x
dtype: float32
- name: corner_tr_y
dtype: float32
- name: corner_br_x
dtype: float32
- name: corner_br_y
dtype: float32
- name: corner_bl_x
dtype: float32
- name: corner_bl_y
dtype: float32
splits:
- name: train
num_examples: 32968
- name: validation
num_examples: 8645
- name: test
num_examples: 6652
configs:
- config_name: default
data_files:
- split: train
path: train/*.parquet
- split: validation
path: val/*.parquet
- split: test
path: test/*.parquet
license: other
task_categories:
- image-segmentation
- keypoint-detection
- object-detection
tags:
- document-detection
- corner-detection
- perspective-correction
- document-scanner
- keypoint-regression
language:
- en
size_categories:
- 10K<n<100K
---
# DocCornerDataset
A high-quality document corner detection dataset for training models to localize the four corners of a document in an image. It is optimized for building robust document scanning and perspective correction applications.
## Dataset Examples
### Training Set
<img src="collages/train_collage.jpg" alt="Training samples" width="600"/>
### Validation Set
<img src="collages/val_collage.jpg" alt="Validation samples" width="600"/>
### Test Set
<img src="collages/test_collage.jpg" alt="Test samples" width="600"/>
*Green polygons show the annotated document corners*
## Dataset Description
This dataset contains images with document corner annotations, optimized for training robust document detection models. It uses the best-performing splits from an iterative dataset cleaning process with multiple quality validation steps.
### Key Features
- **High Quality Annotations**: Labels refined through iterative cleaning with multiple teacher models
- **Diverse Document Types**: IDs, invoices, receipts, books, cards, and general documents
- **Negative Samples**: Includes images without documents for training robust classifiers
- **No Overlap**: Train, validation, and test splits are completely disjoint
## Dataset Statistics
| Split | Images | Description |
|-------|--------|-------------|
| `train` | 32,968 | Training set (cleaned iter3 + hard negatives) |
| `validation` | 8,645 | Validation set (cleaned iter3) |
| `test` | 6,652 | Held-out test set (no overlap with train/val) |
| **Total** | **48,265** | |
## Data Sources and Licenses
This dataset is compiled from multiple open-source datasets. **Please refer to the original dataset licenses before using this data.**
### MIDV Dataset (ID Cards)
Mobile Identity Document Video dataset for identity document detection and recognition.
| Dataset | Images | License | Source |
|---------|--------|---------|--------|
| **MIDV-500** | ~9,400 | Research use | [Website](http://l3i-share.univ-lr.fr/MIDV500/) |
| **MIDV-2019** | ~1,350 | Research use | [Website](http://l3i-share.univ-lr.fr/MIDV2019/) |
**Citation:**
```bibtex
@article{arlazarov2019midv500,
title={MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream},
author={Arlazarov, V.V. and Bulatov, K. and Chernov, T. and Arlazarov, V.L.},
journal={Computer Optics},
volume={43},
number={5},
pages={818--824},
year={2019}
}
@inproceedings{arlazarov2019midv2019,
title={MIDV-2019: Challenges of the modern mobile-based document OCR},
author={Arlazarov, V.V. and Bulatov, K. and Chernov, T. and Arlazarov, V.L.},
booktitle={ICDAR},
year={2019}
}
```
### SmartDoc Dataset (Documents)
SmartDoc Challenge dataset for document image acquisition and quality assessment.
| Dataset | Images | License | Source |
|---------|--------|---------|--------|
| **SmartDoc** | ~1,380 | Research use | [Website](https://smartdoc.univ-lr.fr/) |
**Citation:**
```bibtex
@inproceedings{burie2015smartdoc,
title={ICDAR 2015 Competition on Smartphone Document Capture and OCR (SmartDoc)},
author={Burie, J.C. and Chazalon, J. and Coustaty, M. and others},
booktitle={ICDAR},
year={2015}
}
```
### COCO Dataset (Negative Samples)
Common Objects in Context dataset used for negative samples (images without documents).
| Dataset | Images | License | Source |
|---------|--------|---------|--------|
| **COCO val2017** | ~4,300 | [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) | [Website](https://cocodataset.org/) |
| **COCO train2017** | ~11,400 | [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) | [Website](https://cocodataset.org/) |
**Note:** COCO images containing categories that could be confused with documents were excluded: book, laptop, tv, cell phone, keyboard, mouse, remote, clock.
**Citation:**
```bibtex
@inproceedings{lin2014coco,
title={Microsoft COCO: Common Objects in Context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and others},
booktitle={ECCV},
year={2014}
}
```
### Roboflow Universe (Various Documents)
Various document datasets from Roboflow Universe community.
| Category | Datasets | License | Source |
|----------|----------|---------|--------|
| **Documents** | document_segmentation_v2, doc_scanner, doc_rida, documento | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
| **Bills/Invoices** | bill_segmentation, cs_invoice | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
| **Receipts** | receipt_detection, receipt_occam, receipts_coolstuff | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
| **ID Cards** | card_corner, card_4_class, id_card_skew, id_detections, idcard_jj | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
| **Passports** | segment_passport | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
| **Books** | book_reader, page_segmentation_tecgp, book_cmjt2 | Various (check individual) | [Roboflow Universe](https://universe.roboflow.com/) |
**Note:** Roboflow datasets have various licenses. Please check the individual dataset pages on [Roboflow Universe](https://universe.roboflow.com/) for specific license terms.
## Features
| Feature | Type | Description |
|---------|------|-------------|
| `image` | Image | The document image (JPEG) |
| `filename` | string | Original filename for traceability |
| `is_negative` | bool | `True` if image contains no document |
| `corner_tl_x` | float32 | Top-left corner X coordinate (normalized 0-1) |
| `corner_tl_y` | float32 | Top-left corner Y coordinate (normalized 0-1) |
| `corner_tr_x` | float32 | Top-right corner X coordinate (normalized 0-1) |
| `corner_tr_y` | float32 | Top-right corner Y coordinate (normalized 0-1) |
| `corner_br_x` | float32 | Bottom-right corner X coordinate (normalized 0-1) |
| `corner_br_y` | float32 | Bottom-right corner Y coordinate (normalized 0-1) |
| `corner_bl_x` | float32 | Bottom-left corner X coordinate (normalized 0-1) |
| `corner_bl_y` | float32 | Bottom-left corner Y coordinate (normalized 0-1) |
### Corner Order
Corners are ordered **clockwise** starting from top-left:
```
1 (TL) -------- 2 (TR)
| |
| Document |
| |
4 (BL) -------- 3 (BR)
```
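If you ever need to re-impose this order on an unordered set of four points (e.g. raw model output), a common heuristic — not necessarily the procedure used to produce these labels — picks TL/BR by coordinate sum and TR/BL by coordinate difference. This is a sketch that assumes image coordinates (y pointing down) and a roughly axis-aligned quad:

```python
def order_corners(pts):
    """Order four (x, y) points as TL, TR, BR, BL (image coords, y down)."""
    sums = [x + y for x, y in pts]    # smallest sum -> TL, largest -> BR
    diffs = [y - x for x, y in pts]   # smallest diff -> TR, largest -> BL
    tl = pts[sums.index(min(sums))]
    br = pts[sums.index(max(sums))]
    tr = pts[diffs.index(min(diffs))]
    bl = pts[diffs.index(max(diffs))]
    return [tl, tr, br, bl]
```

The heuristic breaks down for strongly rotated documents, where angle-based sorting around the centroid is more reliable.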
### Coordinate System
- Coordinates are **normalized** to the range [0, 1]
- To convert to pixel coordinates: `pixel_x = corner_x * image_width`
- Origin (0, 0) is at the **top-left** of the image
### Negative Samples
Images with `is_negative=True`:
- Do not contain any document
- All corner coordinates are `null`
- Useful for training classifiers to reject non-document images
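A small accessor like the following (hypothetical helper, not part of the dataset API) keeps downstream code from reading the null coordinates of negative samples:

```python
def get_corners(sample):
    """Return [(x, y), ...] in TL, TR, BR, BL order, or None for negatives."""
    if sample["is_negative"]:
        return None
    return [
        (sample[f"corner_{k}_x"], sample[f"corner_{k}_y"])
        for k in ("tl", "tr", "br", "bl")
    ]

# Toy samples mimicking the dataset schema
positive = {"is_negative": False,
            "corner_tl_x": 0.1, "corner_tl_y": 0.1,
            "corner_tr_x": 0.9, "corner_tr_y": 0.1,
            "corner_br_x": 0.9, "corner_br_y": 0.9,
            "corner_bl_x": 0.1, "corner_bl_y": 0.9}
negative = {"is_negative": True}
```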
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load all splits
dataset = load_dataset("mapo80/DocCornerDataset")
# Access specific splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
print(f"Train: {len(train_data)} samples")
print(f"Val: {len(val_data)} samples")
print(f"Test: {len(test_data)} samples")
```
### Iterating Over Samples
```python
for sample in dataset["train"]:
image = sample["image"] # PIL Image
filename = sample["filename"]
if not sample["is_negative"]:
# Get corner coordinates (normalized 0-1)
corners = [
(sample["corner_tl_x"], sample["corner_tl_y"]),
(sample["corner_tr_x"], sample["corner_tr_y"]),
(sample["corner_br_x"], sample["corner_br_y"]),
(sample["corner_bl_x"], sample["corner_bl_y"]),
]
# Convert to pixel coordinates
w, h = image.size
corners_px = [(int(x * w), int(y * h)) for x, y in corners]
```
### Visualizing Annotations
```python
from PIL import Image, ImageDraw
def draw_corners(image, corners, color=(0, 255, 0), width=3):
"""Draw document corners on image."""
draw = ImageDraw.Draw(image)
w, h = image.size
# Convert normalized to pixel coords
points = [(int(c[0] * w), int(c[1] * h)) for c in corners]
# Draw polygon
for i in range(4):
draw.line([points[i], points[(i+1) % 4]], fill=color, width=width)
# Draw corner circles
for p in points:
r = 5
draw.ellipse([p[0]-r, p[1]-r, p[0]+r, p[1]+r], fill=color)
return image
# Example usage
sample = dataset["train"][0]
if not sample["is_negative"]:
corners = [
(sample["corner_tl_x"], sample["corner_tl_y"]),
(sample["corner_tr_x"], sample["corner_tr_y"]),
(sample["corner_br_x"], sample["corner_br_y"]),
(sample["corner_bl_x"], sample["corner_bl_y"]),
]
annotated = draw_corners(sample["image"].copy(), corners)
annotated.show()
```
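The annotated corners are exactly what a perspective-correction pipeline needs to estimate a homography. In practice you would use `cv2.getPerspectiveTransform` and `cv2.warpPerspective`; as a self-contained sketch, the same 4-point homography can be solved with the direct linear transform (DLT) in pure NumPy, assuming pixel-space corners ordered TL, TR, BR, BL:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 3x3 homography H mapping each src point to dst via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # With 4 exact correspondences the homography is the null vector
    # of this 8x9 system, i.e. the last right-singular vector.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def project(h, pt):
    """Apply a homography to a 2D point."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Map a tilted document quad to an upright 300x400 rectangle
src = [(100, 80), (400, 60), (420, 380), (90, 400)]
dst = [(0, 0), (300, 0), (300, 400), (0, 400)]
H = homography_from_corners(src, dst)
```

The `src`/`dst` values here are illustrative; in a real pipeline `src` comes from the (denormalized) corner annotations or model predictions, and the warp resamples the full image.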
### Training a Model (PyTorch Example)
```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from datasets import load_dataset

dataset = load_dataset("mapo80/DocCornerDataset")

# Resize and convert PIL images to fixed-size tensors
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def collate_fn(batch):
    images = torch.stack([transform(s["image"].convert("RGB")) for s in batch])
# Stack corner coordinates (8 values per sample)
corners = []
for s in batch:
if s["is_negative"]:
corners.append(torch.zeros(8))
else:
corners.append(torch.tensor([
s["corner_tl_x"], s["corner_tl_y"],
s["corner_tr_x"], s["corner_tr_y"],
s["corner_br_x"], s["corner_br_y"],
s["corner_bl_x"], s["corner_bl_y"],
]))
return images, torch.stack(corners)
train_loader = DataLoader(
dataset["train"],
batch_size=32,
shuffle=True,
collate_fn=collate_fn
)
```
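With negatives collated as zero vectors as above, the corner regression loss should be masked so that negative samples do not pull predictions toward (0, 0); document presence is then handled by a separate classification head. A NumPy sketch of the masked loss (a hypothetical design, not the training recipe used for the reported models):

```python
import numpy as np

def masked_corner_loss(pred, target, is_negative):
    """Mean L1 corner loss over positive samples only.

    pred, target: (N, 8) arrays of normalized corner coordinates.
    is_negative: (N,) boolean array; negatives contribute no corner loss.
    """
    pos = ~np.asarray(is_negative)
    if not pos.any():
        return 0.0
    return float(np.abs(pred[pos] - target[pos]).mean())
```

The same masking translates directly to PyTorch by indexing the batch tensors with the boolean mask before computing `F.l1_loss`.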
## Model Performance
Models trained on this dataset achieve the following performance:
| Model | Input Size | mIoU (val) | mIoU (test) |
|-------|------------|------------|-------------|
| MobileNetV2 (alpha=0.35) | 224x224 | 0.9894 | 0.9826 |
| MobileNetV2 (alpha=0.35) | 256x256 | 0.9902 | 0.9819 |
*mIoU = Mean Intersection over Union between predicted and ground truth quadrilaterals*
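The IoU of two quadrilaterals is the area of their polygon intersection over the area of their union. In practice a library such as `shapely` (`Polygon.intersection(...).area`) is convenient; as a dependency-free sketch, the intersection of convex quads can be computed with Sutherland-Hodgman clipping plus the shoelace formula:

```python
def polygon_area(poly):
    """Signed shoelace area of a polygon given as [(x, y), ...]."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

def clip_polygon(subject, clipper):
    """Sutherland-Hodgman: clip `subject` against a convex `clipper`."""
    # The inside test assumes a counter-clockwise clipper; fix orientation.
    if polygon_area(clipper) < 0:
        clipper = clipper[::-1]

    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # Intersection of line p1-p2 with line a-b
        denom = (p1[0] - p2[0]) * (a[1] - b[1]) - (p1[1] - p2[1]) * (a[0] - b[0])
        t = ((p1[0] - a[0]) * (a[1] - b[1]) - (p1[1] - a[1]) * (a[0] - b[0])) / denom
        return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        if not output:
            break
        inputs, output = output, []
        s = inputs[-1]
        for e in inputs:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

def quad_iou(q1, q2):
    """IoU of two convex quadrilaterals given as corner lists."""
    inter_poly = clip_polygon(q1, q2)
    inter = abs(polygon_area(inter_poly)) if len(inter_poly) >= 3 else 0.0
    union = abs(polygon_area(q1)) + abs(polygon_area(q2)) - inter
    return inter / union if union > 0 else 0.0
```

Note this sketch assumes convex quads; the evaluation code behind the table above may differ in details.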
## Citation
If you use this dataset in your research, please cite this dataset and the original source datasets:
```bibtex
@dataset{doccornerdataset2025,
author = {mapo80},
title = {DocCornerDataset: Document Corner Detection Dataset},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/mapo80/DocCornerDataset}
}
```
**Please also cite the original datasets used:**
- MIDV-500/MIDV-2019 (Arlazarov et al., 2019)
- SmartDoc (Burie et al., 2015)
- COCO (Lin et al., 2014)
## License
⚠️ **This dataset is compiled from multiple sources with different licenses.**
| Source | License |
|--------|---------|
| MIDV-500/MIDV-2019 | Research use only |
| SmartDoc | Research use only |
| COCO | CC BY 4.0 |
| Roboflow datasets | Various (check individual datasets) |
**Before using this dataset, please review the licenses of the original datasets:**
- [MIDV-500](http://l3i-share.univ-lr.fr/MIDV500/)
- [MIDV-2019](http://l3i-share.univ-lr.fr/MIDV2019/)
- [SmartDoc](https://smartdoc.univ-lr.fr/)
- [COCO](https://cocodataset.org/#termsofuse)
- [Roboflow Universe](https://universe.roboflow.com/) (check individual datasets)
## Acknowledgments
This dataset was created by combining and processing multiple open-source datasets. We thank the authors of MIDV, SmartDoc, COCO, and the Roboflow community for making their data available.
## Related Projects
- [DocCornerNet](https://github.com/mapo80/DocCornerNet-CoordClass) - Document corner detection model trained on this dataset