# LabOS Segmentation Dataset

A curated instance segmentation dataset of laboratory equipment that foundation models (SAM, Gemini, YOLO-World, Grounded) consistently struggle with, built to cover their gaps — including vortex genies, Eppendorf tubes, multi-tube racks, colored caps, and fine-grained sub-parts such as rack holes, tube tops, and mixer plates.

Annotations are provided in both COCO JSON and YOLO polygon formats.
## Why This Dataset?

General-purpose vision models fail on lab equipment for several reasons:

- Repetitive, nearly identical sub-objects — racks with dozens of uniform holes challenge both detection and counting; most foundation models fail at both.
- Transparent / translucent materials — Eppendorf tubes and caps have subtle visual boundaries.
- Fine-grained part segmentation — distinguishing a vortex genie top plate from its body, or an orange cap top from its barrel, requires part-level understanding that VLMs lack.
- Domain specificity — lab-bench imagery is severely underrepresented in web-scraped pre-training data.
## Dataset Statistics

### Split Summary
| Split | Images | Annotations |
|---|---|---|
| Train | 228 | 2,736 |
| Validation | 57 | 579 |
| Total | 285 | 3,315 |
Split ratio: ~80 / 20 (train / val).
### Annotations per Category
| Category | Train | Val | Total |
|---|---|---|---|
| 14ml rack hole | 1,263 | 59 | 1,322 |
| rack 50ml hole | 506 | 258 | 764 |
| 50ml eppendorf tube | 182 | 67 | 249 |
| 50Ml eppendorf orange cap | 108 | 34 | 142 |
| 14ml round bottom tube top | 172 | 7 | 179 |
| 50Ml eppendorf orange cap top | 91 | 30 | 121 |
| 50Ml rack | 66 | 31 | 97 |
| Vortex Genie 2 | 72 | 21 | 93 |
| Vortex Genie Top Plate | 59 | 14 | 73 |
| Vortex Genie Hole | 54 | 14 | 68 |
| 50Ml eppendorf cap | 47 | 3 | 50 |
| 50Ml eppendorf blue cap | 26 | 22 | 48 |
| 50Ml eppendorf cap top | 40 | 3 | 43 |
| 14ml rack | 33 | 2 | 35 |
| 50Ml eppendorf blue cap top | 17 | 14 | 31 |
| Total | 2,736 | 579 | 3,315 |
## File Structure
dataset-2/
├── images/ # 285 PNG images (1280×720)
├── labels/ # polygon segmentation (.txt, one per image)
├── annotations.json # COCO format — all images
├── annotations_train.json # COCO format — training split
├── annotations_val.json # COCO format — validation split
├── dataset.yaml # dataset config
└── demo_imgs/ # Annotated visualization examples
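The per-category statistics above can be reproduced from any of the COCO files. A minimal sketch, using a tiny inline COCO-style dict as a stand-in for the real `annotations_train.json` so the snippet is self-contained (the example annotations are illustrative, not from the dataset):

```python
import json
from collections import Counter

# Stand-in for: coco = json.load(open("annotations_train.json"))
coco = {
    "images": [{"id": 1, "file_name": "1.png", "width": 1280, "height": 720}],
    "categories": [{"id": 1, "name": "50Ml rack"}, {"id": 2, "name": "14ml rack hole"}],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 2, "segmentation": [[0, 0, 10, 0, 10, 10]]},
        {"id": 2, "image_id": 1, "category_id": 2, "segmentation": [[5, 5, 15, 5, 15, 15]]},
        {"id": 3, "image_id": 1, "category_id": 1, "segmentation": [[0, 0, 100, 0, 100, 50]]},
    ],
}

# Map category ids to names, then tally one count per annotation.
names = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(names[a["category_id"]] for a in coco["annotations"])
print(counts["14ml rack hole"], counts["50Ml rack"])  # → 2 1
```

Run against `annotations_train.json` and `annotations_val.json`, the tallies should match the train and val columns of the table above.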
## Annotation Format

- **COCO JSON** — bounding boxes plus polygon segmentation masks per instance.
- **YOLO TXT** — one file per image, each line:

  `<class_id> x1 y1 x2 y2 ... xN yN`

Coordinates are normalized to [0, 1]. Annotations were created and exported from CVAT.
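A minimal sketch of reading one YOLO label line and denormalizing its polygon back to pixel coordinates (the 1280×720 size matches the dataset's images; the example line itself is illustrative, not taken from a real label file):

```python
def parse_yolo_polygon(line: str, img_w: int, img_h: int):
    """Return (class_id, [(x_px, y_px), ...]) for one YOLO polygon label line."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Coordinates come as x1 y1 x2 y2 ... pairs, normalized to [0, 1].
    points = [(coords[i] * img_w, coords[i + 1] * img_h)
              for i in range(0, len(coords), 2)]
    return class_id, points

cls, pts = parse_yolo_polygon("3 0.5 0.5 0.75 0.5 0.75 0.9", 1280, 720)
# cls == 3; pts == [(640.0, 360.0), (960.0, 360.0), (960.0, 648.0)]
```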
## Example Visualizations

Annotated examples are provided in `demo_imgs/`:

- Small scene — vortex hole present, 7 classes (8 instances)
- Full lab scene — vortex genie + 14ml rack + 50ml tubes (113 instances)
- Vortex genie + 14ml rack with holes and tube tops (44 instances)
- 50ml rack — blue and orange caps, rack holes, no vortex (16 instances)
- Vortex top plate + orange caps + rack holes (36 instances)
- Dense 50ml rack — blue, orange & generic caps with rack holes (81 instances)
- Vortex genie + orange caps, no rack holes (27 instances)
- Blue caps focus — rack holes and tube bodies (42 instances)
- 14ml rack + vortex genie — large annotation count (130 instances)
## Pre-trained Weights

`segment-yolo-weights.pt` — a YOLO segmentation model trained on this dataset. Load with:

```python
from ultralytics import YOLO

model = YOLO("segment-yolo-weights.pt")
results = model("images/1.png")
```
## License

MIT — see the license field above.