PillChecker NER Benchmark
Benchmark dataset for evaluating Named Entity Recognition (NER) models on pharmaceutical packaging text.
Dataset Description
500 synthesized pack-label texts generated from the MattBastar/Medicine_Details dataset, designed to simulate OCR output from photos of pill packaging.
Each case contains:
- `id`: Unique case identifier
- `category`: `single_ingredient`, `dual_ingredient`, or `multi_ingredient`
- `ocr_text`: Synthesized pharmaceutical label text (clean or with OCR noise)
- `expected_names`: Ground-truth list of active pharmaceutical ingredients
- `source_composition`: Original composition string from the source dataset
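A minimal sketch of loading the cases and inspecting these fields with the `datasets` library; the repo id and split name below are placeholders, not confirmed by this card:

```python
# Sketch: load the benchmark and inspect one case's fields.
# Assumption: "SPerva/pillchecker-ner-benchmark" and the "train" split are
# placeholders; substitute the real dataset path and split.
from datasets import load_dataset

cases = load_dataset("SPerva/pillchecker-ner-benchmark", split="train")
print(cases.column_names)          # id, category, ocr_text, expected_names, source_composition
print(cases[0]["category"])        # e.g. single_ingredient
print(cases[0]["expected_names"])  # ground-truth ingredient list for this case
```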
Use Case
This dataset tests whether NER models can extract active pharmaceutical ingredients from short, formulaic packaging text — a domain significantly different from biomedical literature.
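As an illustration of that use case, the sketch below runs an off-the-shelf token-classification pipeline over `ocr_text` and scores recall against `expected_names` by exact lowercase string match. The dataset repo id and model id are placeholders, and a real evaluation would need entity normalization rather than literal matching.

```python
# Naive recall sketch: extract entities from ocr_text, compare to expected_names.
# Assumptions: dataset repo id, split name, and NER model id are placeholders.
from datasets import load_dataset
from transformers import pipeline

cases = load_dataset("SPerva/pillchecker-ner-benchmark", split="train")
ner = pipeline(
    "token-classification",
    model="some-org/some-biomedical-ner-model",  # placeholder model id
    aggregation_strategy="simple",               # merge subword tokens into full spans
)

hits, total = 0, 0
for case in cases:
    predicted = {ent["word"].strip().lower() for ent in ner(case["ocr_text"])}
    expected = {name.strip().lower() for name in case["expected_names"]}
    hits += len(expected & predicted)
    total += len(expected)

print(f"ingredient recall (exact match): {hits / total:.2%}")
```

Exact string matching understates real performance (spelling variants, salt forms, OCR noise), so treat this only as a smoke test.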
Benchmark Rules
This dataset is the canonical location for benchmark input cases only. Benchmark result history belongs in hf://buckets/SPerva/pillchecker-experiments/benchmark-results/, not in this dataset repository.
Current records contain `id`, `category`, `ocr_text`, `expected_names`, and `source_composition`. Before using this dataset for entity-linking or interaction claims, add `expected_rxcuis`, `clean_text`, `expected_interactions`, and known-safe pairs, as described in the GitHub repo's AGENTS.md and eval/README.md.
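A small pre-flight check along these lines can guard against running entity-linking or interaction evaluations before the extended fields exist; the column names come from this card, while the repo id and split are again placeholders:

```python
# Guard: refuse entity-linking / interaction evaluation until the extended schema exists.
# Assumption: the dataset repo id and split are placeholders.
from datasets import load_dataset

REQUIRED_FOR_LINKING = {"expected_rxcuis", "clean_text", "expected_interactions"}

cases = load_dataset("SPerva/pillchecker-ner-benchmark", split="train")
missing = REQUIRED_FOR_LINKING - set(cases.column_names)
if missing:
    raise SystemExit(f"Not ready for entity-linking claims; missing columns: {sorted(missing)}")
```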
Baseline Results
Historical OpenMed baseline results are stored in the PillChecker experiments bucket. Project-facing result claims should cite a bucket run manifest and Git commit.
Source
Part of the PillChecker project.